
Refactoring all markets related code #54

Open
wants to merge 2 commits into master

Conversation

davidrpugh
Member

@adrian-carro I have separated out all of the code related to market operations into a separate markets package. This exposed a number of issues with the structure of the housing package code. I am going to refactor the code in the markets package to reflect what I think is a better overall organizational structure. When I am done, we can discuss this PR and you can decide whether or not you want to merge it.

@davidrpugh
Member Author

@adrian-carro I was taking a look at the HousingMarket.java file and noticed that there are no methods for removing or updating bids. Do buyers never explicitly remove or cancel outstanding bids to buy homes? Do buyers never update the prices of their bids?

@adrian-carro
Member

Hi @davidrpugh, no, bidders never get to update their bids in the course of a given time step, and bids get deleted at the end of every time step, so there is no need for individual households to remove their bids.

@davidrpugh
Member Author

@adrian-carro If bidders never update their bids, then how are prices bid up in the event that a particular house gets matched with multiple buyers?

@adrian-carro
Member

@davidrpugh it's the seller who increases the price, not the buyer updating its bid.

@davidrpugh
Member Author

@adrian-carro I am struggling to understand the principle behind the matching algorithm being used to match buyers and sellers in the market. I think it is a kind of deferred acceptance algorithm: a buyer is initially matched to the highest quality house that it can afford; after all buyers have had a chance to be matched, matches between one buyer and one seller are finalized; for matches between a single seller and multiple buyers, the seller then increases the price until only a single buyer remains. Is this the idea?

@adrian-carro
Member

@davidrpugh that's the basic idea, except that sellers matched with multiple bids get to bid up their price only once, and then they choose a random buyer among those who can still pay for it. If none of them can, then the highest bidder gets selected. Hope this helps.

@davidrpugh
Member Author

Thanks for confirming. Here is a sketch of a Scala implementation of the algorithm discussed above.

import scala.collection.immutable.HashMap
import scala.util.Random


// define stub implementations of key classes
case class House(quality: Long)
case class HouseSaleRecord(householdId: Long, house: House, listPrice: Long)
case class HouseBuyerRecord(householdId: Long, limitPrice: Long)

// source of randomness
val prng: Random = new Random(42)

// test input data (would be stored in the HouseSaleMarket instance); efficiency could be
// improved by keeping sellerRecords sorted in ascending order of listPrice
val buyerRecords: Set[HouseBuyerRecord] = Set(HouseBuyerRecord(1, 100), HouseBuyerRecord(2, 105), HouseBuyerRecord(4, 105))
val sellerRecords: Set[HouseSaleRecord] = Set(HouseSaleRecord(3, House(15), 102), HouseSaleRecord(5, House(5), 100))

// first we create a mapping between a seller and a set of potential buyers
val preliminaryMatches: HashMap[HouseSaleRecord, Set[HouseBuyerRecord]] = {
  buyerRecords.aggregate(HashMap.empty[HouseSaleRecord, Set[HouseBuyerRecord]])(
    { case (aggregatedMatches, buyerRecord) => 
      val affordable = sellerRecords.filter(_.listPrice <= buyerRecord.limitPrice)
      if (affordable.isEmpty) {
        aggregatedMatches
      } else {
        val highestQuality = affordable.maxBy(_.house.quality)
        val matchedBuyerRecords = aggregatedMatches.getOrElse(highestQuality, Set())
        aggregatedMatches.updated(highestQuality, matchedBuyerRecords + buyerRecord)
      }
    },
    { case (matches, additionalMatches) => 
      matches.merged(additionalMatches)({ case ((k1, v1), (k2, v2)) => (k1, v1 ++ v2) })
    }
  )  
}

preliminaryMatches  // Map(HouseSaleRecord(5,House(5),100) -> Set(HouseBuyerRecord(1,100)), HouseSaleRecord(3,House(15),102) -> Set(HouseBuyerRecord(2,105), HouseBuyerRecord(4,105))): scala.collection.immutable.HashMap

// then we finalize the matches
val finalizedMatches: HashMap[HouseSaleRecord, HouseBuyerRecord] = {
  preliminaryMatches.map {
    case (sellerRecord, buyerRecords) => 
    if (buyerRecords.size == 1) {
      (sellerRecord, buyerRecords.head)
    } else {  // this should really be an auction where the seller solicits sealed bids from the buyers!
      val shuffledBuyerRecords = prng.shuffle(buyerRecords.toSeq)  // shuffle a Seq, not the Set, so the pick is actually random
      val randomBuyerRecord = shuffledBuyerRecords.head
      (sellerRecord, randomBuyerRecord)
    }
  }
}

finalizedMatches  //Map(HouseSaleRecord(5,House(5),100) -> HouseBuyerRecord(1,100), HouseSaleRecord(3,House(15),102) -> HouseBuyerRecord(2,105)): scala.collection.immutable.HashMap
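
The multiple-buyer branch above just picks one of the matched buyers at random. The rule you describe (the seller bids the price up once, chooses randomly among the buyers who can still pay, and otherwise takes the highest bidder) could be sketched as something like the following; the single multiplicative bidUpFactor is only a placeholder, not the model's actual bid-up rule.

// rough sketch only: bidUpFactor is a placeholder, not the model's actual bid-up mechanism
def selectBuyer(sellerRecord: HouseSaleRecord,
                matchedBuyerRecords: Set[HouseBuyerRecord],
                bidUpFactor: Double): HouseBuyerRecord = {
  val markedUpPrice = sellerRecord.listPrice * bidUpFactor  // seller bids the price up once
  val stillAffordable = matchedBuyerRecords.filter(_.limitPrice >= markedUpPrice)
  if (stillAffordable.nonEmpty) {
    prng.shuffle(stillAffordable.toSeq).head  // random buyer among those who can still pay
  } else {
    matchedBuyerRecords.maxBy(_.limitPrice)  // otherwise the highest bidder gets selected
  }
}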

At this point I think the matching process above should repeat until all possible matchings are exhausted. For example, given the above input data, the following records are unmatched.

val unMatchedSellerRecords: Set[HouseSaleRecord] = sellerRecords.diff(finalizedMatches.keySet)
unMatchedSellerRecords // Set(): scala.collection.immutable.Set


val unMatchedBuyerRecords: Set[HouseBuyerRecord] = buyerRecords.diff(finalizedMatches.values.toSet)
unMatchedBuyerRecords  // Set(HouseBuyerRecord(4,105)): scala.collection.immutable.Set

Running the above algorithm again on these inputs would yield an empty set of preliminary matches and thus the algorithm would terminate.
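
For instance, the single remaining buyer (limit price 105) would find nothing affordable among the (empty) set of unmatched sale records:

// the lone unmatched buyer has no remaining listings to bid on, so the next pass matches nothing
val affordableForRemainingBuyer = unMatchedSellerRecords.filter(_.listPrice <= 105)
affordableForRemainingBuyer  // Set(): scala.collection.immutable.Set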

This algorithm, which is identical in spirit to the one currently implemented in the model, is basically a "deferred acceptance" algorithm. Most variants of deferred acceptance have been shown to converge to stable, but not necessarily optimal, matchings.

From the notes that have been left in the HousingMarket.java class, it appears that you are not entirely happy with the code. Is this still the case? Would you be open to replacing the current algorithm?

@davidrpugh
Member Author

davidrpugh commented Apr 19, 2018

@adrian-carro Just to be clear: I will write the algorithm in Java, not Scala, but it is easier for me to think about what the algorithm should do by writing it in Scala first.

Here is an updated version of the matching algorithm that terminates on its own, without the need to specify a maximum number of rounds as the current code does.

import scala.collection.immutable.HashMap
import scala.util.Random


case class House(quality: Long)
case class HouseSaleRecord(householdId: Long, house: House, listPrice: Long)
case class HouseBuyerRecord(householdId: Long, limitPrice: Long)

val prng: Random = new Random(142)

type BuyerRecords = Set[HouseBuyerRecord]
type SaleRecords = Set[HouseSaleRecord]
type MatchedRecords = HashMap[HouseSaleRecord, HouseBuyerRecord]

val buyerRecords: BuyerRecords = Set(HouseBuyerRecord(1, 100), HouseBuyerRecord(2, 105), HouseBuyerRecord(4, 105))
val sellerRecords: SaleRecords = Set(HouseSaleRecord(3, House(15), 102), HouseSaleRecord(5, House(5), 100), HouseSaleRecord(6, House(4), 190))

def findPreliminaryMatches(buyerRecords: BuyerRecords, sellerRecords: SaleRecords): HashMap[HouseSaleRecord, BuyerRecords] = {
  buyerRecords.aggregate(HashMap.empty[HouseSaleRecord, BuyerRecords])(
    { case (aggregatedMatches, buyerRecord) => 
      val affordable = sellerRecords.filter(_.listPrice <= buyerRecord.limitPrice)
      if (affordable.isEmpty) {
        aggregatedMatches
      } else {
        val highestQuality = affordable.maxBy(_.house.quality)
        val matchedBuyerRecords = aggregatedMatches.getOrElse(highestQuality, Set())
        aggregatedMatches.updated(highestQuality, matchedBuyerRecords + buyerRecord)
      }
    },
    { case (matches, additionalMatches) => 
      matches.merged(additionalMatches)({ case ((k1, v1), (k2, v2)) => (k1, v1 ++ v2) })
    }
  )  
}

def finalizePreliminaryMatches(matches: HashMap[HouseSaleRecord, BuyerRecords]): MatchedRecords = {
  matches.map {
    case (sellerRecord, buyerRecords) => 
    if (buyerRecords.size == 1) {
      (sellerRecord, buyerRecords.head)
    } else {  // this can be lifted out into a function that is passed in as an argument!
      val shuffledBuyerRecords = prng.shuffle(buyerRecords.toSeq)  // shuffle a Seq, not the Set, so the pick is actually random
      val randomBuyerRecord = shuffledBuyerRecords.head
      (sellerRecord, randomBuyerRecord)
    }
  }
}

def findMatches(buyerRecords: BuyerRecords, saleRecords: SaleRecords): (BuyerRecords, SaleRecords, MatchedRecords) = {

  @annotation.tailrec
  def accumulate(unMatchedBuyerRecords: BuyerRecords, unMatchedSaleRecords: SaleRecords, matches: MatchedRecords): (BuyerRecords, SaleRecords, MatchedRecords) = {
    if (unMatchedBuyerRecords.isEmpty || unMatchedSaleRecords.isEmpty) {
      (unMatchedBuyerRecords, unMatchedSaleRecords, matches)
    } else {  // there might be more possible matches!
      val preliminaryMatches = findPreliminaryMatches(unMatchedBuyerRecords, unMatchedSaleRecords)
      if (preliminaryMatches.isEmpty) {
        (unMatchedBuyerRecords, unMatchedSaleRecords, matches)
      } else {
        val additionalMatches = finalizePreliminaryMatches(preliminaryMatches)
        val remainingUnMatchedBuyerRecords = unMatchedBuyerRecords.diff(additionalMatches.values.toSet)
        val remainingUnMatchedSaleRecords = unMatchedSaleRecords.diff(additionalMatches.keySet)
        accumulate(remainingUnMatchedBuyerRecords, remainingUnMatchedSaleRecords, matches ++ additionalMatches)
      }
    }
  }
  accumulate(buyerRecords, saleRecords, HashMap.empty[HouseSaleRecord, HouseBuyerRecord])
}
  
val (unMatchedBuyerRecords, unMatchedSaleRecords, matchedRecords) = findMatches(buyerRecords, sellerRecords)
unMatchedBuyerRecords  // Set(HouseBuyerRecord(4,105)): scala.collection.immutable.Set
unMatchedSaleRecords  // Set(HouseSaleRecord(6,House(4),190)): scala.collection.immutable.Set
matchedRecords // Map(HouseSaleRecord(5,House(5),100) -> HouseBuyerRecord(1,100), HouseSaleRecord(3,House(15),102) -> HouseBuyerRecord(2,105)): scala.collection.immutable.HashMap

From looking at the markets code: after matching buyers and sellers, the completeTransactions method is called, which handles the transfers of money and houses. There is a lot going on in this method! Lots of tight coupling between households, the financial sector, and markets. Yikes!
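
Just to illustrate the direction I have in mind (the names below are hypothetical, not existing code): the market could return plain transaction records and leave the actual transfers of money and houses to the code that owns that state.

// illustrative sketch only: a market that reports what happened instead of mutating households and banks directly
case class Transaction(buyerId: Long, sellerId: Long, house: House, salePrice: Long)

def clearMarket(buyerRecords: BuyerRecords, saleRecords: SaleRecords): Set[Transaction] = {
  val (_, _, matches) = findMatches(buyerRecords, saleRecords)
  matches.map { case (saleRecord, buyerRecord) =>
    Transaction(buyerRecord.householdId, saleRecord.householdId, saleRecord.house, saleRecord.listPrice)  // listPrice used as the sale price for simplicity
  }.toSet
}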

@adrian-carro
Member

@davidrpugh I hadn't really thought of replacing the market mechanism, but rather just of resolving a couple of details, such as the maximum number of rounds (which I think can be removed) or the concept of enoughBids (which could be better justified or removed). I can't find any other note in the code pointing at problems with the current algorithm. Also, I am not Scala-literate at all, but it seems to me that the algorithm you're proposing is pretty much the same as the current one. Is what you are proposing just a refactoring of the same algorithm, or a replacement of the current algorithm with a new one? If the former, I think I'd rather keep the current format, unless the changes are minor. If the latter, I'd still suggest keeping the current algorithm, but I'm obviously not opposed to a second algorithm being there too as a choice for the user.

@davidrpugh
Member Author

@adrian-carro I think the spirit of the current algorithm is fine (i.e., each buyer tries to purchase the highest quality house that it can afford; initial matches are deferred so that a seller can extract a bit more surplus from multiple prospective buyers).

However, it is not at all obvious to me that the current implementation is correct. The HousingMarket.java class, for instance, has a number of TODO notes that suggest a lack of confidence in what the code is actually doing. Example:

if(config.BIDUP > 1.0) {
    // TODO: All this enough bids mechanism is not explained! The 10000/N factor, the 0.5 added, and the
    // TODO: topping of the function at 4 are not declared in the paper. Remove or explain!
    enoughBids = Math.min(4, (int)(0.5 + nBids*10000.0/config.TARGET_POPULATION));
    // TODO: Also, the role of MONTHS_UNDER_OFFER is not explained or declared!
    pSuccessfulBid = Math.exp(-enoughBids*config.derivedParams.MONTHS_UNDER_OFFER);
    geomDist = new GeometricDistribution(prng, pSuccessfulBid);
    salePrice = offer.getPrice()*Math.pow(config.BIDUP, geomDist.sample());
} else {
    salePrice = offer.getPrice();
}

I have tried to follow the code through, but so much of the work is done via side effects that it is impossible.

What I am proposing is to implement an algorithm that is in the same spirit as the current algorithm, but that is much simpler to understand (and, more importantly, to test).
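
As a rough illustration of the kind of test this would enable (plain assertions, no particular test framework assumed):

// with a deterministic input, the entire matching outcome can be asserted directly
val testBuyers: BuyerRecords = Set(HouseBuyerRecord(1, 100), HouseBuyerRecord(2, 105))
val testSellers: SaleRecords = Set(HouseSaleRecord(3, House(15), 102))

val (testUnmatchedBuyers, testUnmatchedSellers, testMatches) = findMatches(testBuyers, testSellers)

assert(testMatches == HashMap(HouseSaleRecord(3, House(15), 102) -> HouseBuyerRecord(2, 105)))  // only buyer 2 can afford the listing
assert(testUnmatchedBuyers == Set(HouseBuyerRecord(1, 100)))
assert(testUnmatchedSellers.isEmpty)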

@adrian-carro
Member

@davidrpugh Regarding the comments in the code that you point out, please let me just clarify that they're not intended to show a lack of confidence in how the code works but, rather, a misalignment between the code and the paper that needs to be corrected (probably in the paper). In any case, I think I understand now what you're proposing, and yes, it seems reasonable, as long as the current algorithm is also kept for now while I take some time to check compatibility with previous results. Does this make sense to you?

@davidrpugh
Member Author

@adrian-carro Yes, that makes sense.
