Finish the spatial prisoner's dilemma model in TerraME. It needs to deal with empty cells. The first version is shown below.
```lua
--[[
Games on Grids
Martin A. Nowak and Karl Sigmund

Spatial evolutionary game theory, which was first used to shed light on the
emergence of cooperation, has grown rapidly during the past five years and has
proved useful in other biological and economic contexts. A wealth of computer
simulations of artificial populations now exists showing that cooperation can
be stably sustained in societies of simple automata. The introduction of
spatial structure has shown that cooperation becomes even more likely if
interactions are restricted to neighbors. The venerable rule of cooperating
with neighbors is certainly not new. But placed alongside other spatial models,
it offers wide perspectives for the emergence and stability of cooperative
societies. In particular, it shows that even if the interaction between two
individuals is not repeated, cooperation can be sustained in the long run.

Spatial Prisoner's Dilemma

Under what conditions will there be a spatial coexistence of cooperators and
defectors, and when will we observe the extinction of either cooperators or
defectors? What do the variety and the size of the spatial domains depend on?
Will the dynamics always lead to stationary patterns, or will there also be
non-stationary patterns in the long run? How do the dynamics change if a larger
local neighborhood is considered in the update rule (i.e., if the second-nearest
neighbors are included, too)?

The example relies on two observations: cooperators can survive by forming
clusters and thereby outweighing losses against defectors, and there is a
balance between payoff and survival rates. It uses the following rules:
1. Each player occupies a single cell.
2. Players compete against all 8 neighbors in a 3 x 3 block of cells.
3. The initial configuration has one defector in the center.
4. Updates are synchronous.
5. Each cell copies the most successful strategy in its neighborhood.

References
Nowak MA & May RM (1992). Evolutionary games and spatial chaos. Nature 359:826-829.
Schweitzer F et al. (2002). Evolution of Cooperation in a Spatial Prisoner's Dilemma. Advances in Complex Systems, vol. 5, no. 2-3, pp. 269-299.
Hauert C & Doebeli M (2004). Spatial structure often inhibits the evolution of cooperation in the spatial Snowdrift game. Nature 428:643-646.
]]--

--[[
1. Payoffs: (T)emptation, (R)eward, (P)unishment, and (S)ucker's payoff
   T > R > P > S -- prisoner's dilemma
   T > R > S > P -- hawk-dove game

   Nowak and May models:
   3 x 3 ("moore") neighborhood, agent is a neighbor of itself.
   Prisoner's dilemma with R = 1.00 and S = 0.00.

   Schweitzer models:
   2 x 2 ("vonneumann") neighborhood, agent is not a neighbor of itself.
   Prisoner's dilemma with R = 3.00 and S = 0.00.

2. Regions in the Nowak-May models
   1.75 < T < 1.80 -- cooperators prevail over a network of defectors
   1.80 < T < 2.00 -- spatial chaos
   If T = 1.85, put one defector in the center cell - spatial chaos.

3. Regions in the Schweitzer models
   Consider R = 3 and S = 0, and vary 3 < T < 6 and 0 < P < 3,
   keeping T > R > P > S (prisoner's dilemma).

   Region A - coexistence with a majority of cooperators:
   3 < T < 4 - P/3   if 0.00 < P < 0.75
   3 < T < 6 - 3P    if 0.75 < P < 1.00
   examples: (T = 3.5, P = 0.5) and (T = 3.8, P = 0.6)

   Region B - coexistence with a minority of cooperators (spatial chaos):
   4 - P/3 < T < 4.5 - P   if 0.00 < P < 0.75
   examples: (T = 3.9, P = 0.5) and (T = 4.2, P = 0.2)
   The limit between A and B is T = 3.8333 and P = 0.5.

   Region C - coexistence with cooperators in small clusters:
   4.5 - P < T < 6 - P     if 0.00 < P < 0.75
   4 - P/3 < T < 6 - P     if 0.75 < P < 1.50
   4 - P/3 < T < 9 - 3P    if 1.50 < P < 1.875
   examples: (T = 4.5, P = 1.0) and (T = 5.6, P = 0.2)
   (T = 4.0, P = 0.5) is the border between regions B and C.
]]--
```
```lua
payoffs = {
    nowak_may = {
        fractal = {temptation = 1.85, reward = 1.00, punishment = 0.01, sucker = 0.00},
        coop1   = {temptation = 1.45, reward = 1.00, punishment = 0.01, sucker = 0.00},
        coop2   = {temptation = 1.76, reward = 1.00, punishment = 0.01, sucker = 0.00},
        chaos   = {temptation = 1.90, reward = 1.00, punishment = 0.01, sucker = 0.00}
    },
    schweitzer = {
        regA     = {temptation = 3.50, reward = 3.00, punishment = 0.50, sucker = 0.00},
        regB     = {temptation = 3.90, reward = 3.00, punishment = 0.50, sucker = 0.00},
        regC     = {temptation = 4.50, reward = 3.00, punishment = 1.00, sucker = 0.00},
        borderAB = {temptation = 3.83, reward = 3.00, punishment = 0.50, sucker = 0.00},
        borderBC = {temptation = 4.00, reward = 3.00, punishment = 0.50, sucker = 0.00}
    }
}

percent_cooperators = {
    nowak_may  = {fractal = 1.00, coop1 = 0.90, coop2 = 0.90, chaos = 0.90},
    schweitzer = {regA = 0.50, regB = 0.75, regC = 0.90, borderAB = 0.50, borderBC = 0.50}
}

local probCooperate = 0

-- create a table with the available strategies in order to
-- allow the user to select one from the graphical interface
strategies = {}
idxstrategies = {}

forEachOrderedElement(percent_cooperators, function(idx, mtable)
    forEachOrderedElement(mtable, function(midx)
        local value = idx.."_"..midx
        table.insert(strategies, value)
        idxstrategies[value] = {author = idx, case = midx}
    end)
end)

-- neighborhood configuration
neigh = {
    nowak_may  = {strategy = "moore", self = true, wrap = true},
    schweitzer = {strategy = "vonneumann", self = false, wrap = true}
}

-- result of a game between two agents
local game = {}
game.Cooperate = {}
game.Defect = {}

-- state of an agent comparing previous and current move
state = {}
state.Cooperate = {}
state.Defect = {}
state.Cooperate.Cooperate = "Cooperate"
state.Cooperate.Defect = "CooperateToDefect"
state.Defect.Cooperate = "DefectToCooperate"
state.Defect.Defect = "Defect"

local function playGame(agent1, agent2)
    return game[agent1.strategy][agent2.strategy]
end

local agent = Agent{
    playWithNeighbors = function(agent)
        local cell = agent:getCell()
        agent.payoff = 0
        forEachNeighbor(cell, function(cell, neigh)
            agent.payoff = agent.payoff + playGame(agent, neigh:getAgent())
        end)
    end,
    findBestStrategy = function(agent)
        local cell = agent:getCell()
        agent.beststrategy = agent.strategy
        local bestpayoff = agent.payoff
        forEachNeighbor(cell, function(cell, neigh)
            local other = neigh:getAgent()
            if other.payoff > bestpayoff then
                bestpayoff = other.payoff
                agent.beststrategy = other.strategy
            end
        end)
    end,
    changeStrategy = function(agent)
        agent.state = state[agent.strategy][agent.beststrategy]
        agent.strategy = agent.beststrategy
    end,
    init = function(self)
        if math.random() <= probCooperate then
            self.strategy = "Cooperate"
        else
            self.strategy = "Defect"
        end

        self.state = self.strategy
        self.beststrategy = self.strategy
    end
}

function defectorCentre(model)
    -- find the central cell
    local mid = (model.dim - 1) / 2
    local cell = model.cells:get(mid, mid)

    -- get the agent at the central cell
    local ag = cell:getAgent()
    ag.strategy = "Defect"
    ag.state = "Defect"
end

SPD = Model{
    dim = Choice{min = 10, default = 41},
    agents = Choice{min = 100, default = 1681},
    finalTime = Choice{min = 50, default = 100},
    strategy = Choice(strategies),
    init = function(model)
        verify(model.agents <= model.dim * model.dim, "There should be enough space for all agents.")

        local author = idxstrategies[model.strategy].author
        local case = idxstrategies[model.strategy].case

        game.Cooperate.Cooperate = payoffs[author][case].reward
        game.Cooperate.Defect = payoffs[author][case].sucker
        game.Defect.Cooperate = payoffs[author][case].temptation
        game.Defect.Defect = payoffs[author][case].punishment

        probCooperate = percent_cooperators[author][case]

        model.cell = Cell{
            color = function(cell)
                local agent = cell:getAgent()
                if agent then
                    return agent.state
                else
                    return "Empty"
                end
            end
        }

        model.cells = CellularSpace{
            xdim = model.dim,
            ydim = model.dim,
            instance = model.cell
        }

        model.cells:createNeighborhood(neigh[author])

        model.society = Society{
            quantity = model.agents,
            instance = agent,
            percentCooperators = function(self)
                local num_coop = 0
                forEachAgent(self, function(agent)
                    if agent.strategy == "Cooperate" then
                        num_coop = num_coop + 1
                    end
                end)

                return (100 * num_coop) / #self
            end
        }

        model.env = Environment{model.cells, model.society}
        model.env:createPlacement{strategy = "uniform"}

        if author == "nowak_may" and case == "fractal" then
            defectorCentre(model)
        end

        model.timer = Timer{
            Event{action = function()
                model.society:playWithNeighbors()
                model.society:findBestStrategy()
                model.society:changeStrategy()
                model.society:notify()
                model.cells:notify()
            end}
        }

        Map{
            target = model.cells,
            select = "color",
            color = "RdBu",
            value = {"Defect", "CooperateToDefect", "Empty", "DefectToCooperate", "Cooperate"}
        }

        Chart{
            target = model.society,
            select = "percentCooperators"
        }

        model.society:notify()
    end
}

SPD:configure()
```
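The open task, dealing with empty cells, arises because `neigh:getAgent()` returns nil whenever `agents < dim * dim`, so `playWithNeighbors` and `findBestStrategy` would fail on unoccupied neighbors. One plausible approach (an assumption on my part, not the authors' solution) is simply to skip empty neighbors in both loops. The Python sketch below mirrors the model's synchronous update on a toy wrapped grid, with `None` marking empty cells; all names are illustrative.

```python
# Payoff lookup mirroring the Lua `game` table: row = my move, col = neighbor's
# move, filled with the Nowak-May "fractal" case (T = 1.85, R = 1, P = 0.01, S = 0).
PAYOFF = {"C": {"C": 1.00, "D": 0.00},
          "D": {"C": 1.85, "D": 0.01}}

def neighbors(grid, i, j):
    """Moore neighborhood with toroidal wrap, including the cell itself
    (matching strategy = "moore", self = true, wrap = true)."""
    n = len(grid)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            yield (i + di) % n, (j + dj) % n

def step(grid):
    """One synchronous update: score every occupied cell against its non-empty
    neighbors, then copy the strategy of the highest-scoring cell in the
    neighborhood. Empty cells (None) are skipped in both phases, which is one
    plausible way to 'deal with empty cells' as the issue asks."""
    n = len(grid)
    payoff = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if grid[i][j] is None:
                continue
            payoff[i][j] = sum(PAYOFF[grid[i][j]][grid[ni][nj]]
                               for ni, nj in neighbors(grid, i, j)
                               if grid[ni][nj] is not None)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] is None:
                continue
            best, best_pay = grid[i][j], payoff[i][j]
            for ni, nj in neighbors(grid, i, j):
                if grid[ni][nj] is not None and payoff[ni][nj] > best_pay:
                    best, best_pay = grid[ni][nj], payoff[ni][nj]
            new[i][j] = best
    return new
```

On a 5 x 5 lattice of cooperators with one central defector and one empty cell, a single step with these payoffs turns the 3 x 3 block around the defector into defectors, while the empty cell stays empty and does not break the payoff loops. The same nil-guard would translate directly to the two `forEachNeighbor` callbacks in the Lua code.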
pedro-andrade-inpe