
Validating distance decay through agent based modeling

Abstract

The objective of this research is to demonstrate the utility of agent based models and simulated experiments in understanding criminal behavior. In particular, it focuses upon the distance decay function, which has wide applicability in understanding how offenders move about their awareness space and select targets for crime. The basis for distance decay is the assumption that the offender fears recognition by his neighbors and therefore tends to commit his crimes some distance from his home location, but not too far from it. This is an untested assumption, and it rests on a further assumption that recognition arises from frequent interactions. There is no simple way to test these assumptions in real life. This paper argues that simulated experiments using agent based modeling are an appropriate method for criminological concepts that are difficult to test. Two types of agents are created: one representing the offender and the other the victim. They are assigned specific characteristics that control their actions, such as moving about a neighborhood and making rational choices that maximize gain while minimizing the risk of apprehension arising from interaction with other residents of the neighborhood. The simulation shows that, beginning with these simple principles, a pattern of target selection emerges that resembles the distance decay function. The importance of this technique lies in the fact that such experiments provide the means to apply agent based modeling to validate a variety of criminological concepts. While the technique has limitations with respect to validation, it can help in understanding the behavior of offenders as they commit crimes individually as well as in groups.

Introduction

This paper aims to develop an agent based model to test the well known concept of ‘distance decay’ [1], which forms the basis for understanding the movement patterns and target selection of offenders and for designing a variety of crime control measures. The spatial movement patterns of motivated offenders are important to study in order to understand how offenders select their targets. Indeed, spatial research is important for many criminological perspectives, such as geographic profiling [2], and for understanding criminal behavior and the resulting crime patterns [3]. Criminological research suggests that offenders tend to travel short distances to commit their crimes: the number of crimes decreases exponentially as the distance from the home or base location increases [4]. The rational choice perspective also argues that an offender will choose a target at a shorter distance rather than a longer one. Furthermore, routine activity theory [5] suggests that criminal events are likely to occur around the regular paths taken by offenders, which will usually lie close to their home locations, as these are the most common settings for their activities. It is reasonable to expect that criminals will minimize the time and effort needed to commit a crime by selecting targets closer to their usual places of residence or to places familiar to them.

The distance decay concept developed from the geometry of crime [1] argues that motivated offenders select their targets within their awareness space, which has been described as the set of places offenders come to know as they go about daily life [6]. This awareness space forms around the offender's home, work place and areas of leisure and recreational activity. Spending more time at a place increases familiarity and suggests opportunities to the offender. Within this awareness space lies the ‘activity space’ where offenders commit their crimes. Another implication of the decay concept is that offenders fear recognition in their local neighborhood and would therefore not commit crimes in areas where they are likely to be known. This region of no crime around the home location has been described as a ‘buffered distance decay function’ [7], implying that predatory offenders avoid committing crimes in the immediate vicinity of their homes. While such buffers are absent for spontaneous crimes or crimes of passion [8], crimes that are premeditated will involve such a buffer zone [9].

Offenders travel long distances only when they have a specific target in mind, which involves planning and careful consideration. The distance between a drug dealer’s home and place of transaction was found to be less than a mile [10]. Research suggests that property crimes involve traveling greater distances [11] than crimes of predatory violence [12]. It has also been reported that most offenders commit a large number of their offenses a short distance from their residences, and that as the distance increases the number of offenses decreases [13]. Further, it has been pointed out that even if the costs in time, money and energy needed to overcome distance reduce the probability of an individual committing crimes farther away, the distance decay result still holds at the aggregate level [14]. A study of robbery found that offenders travelled further when they performed more professional robberies [15]: robbers combined effort minimization with opportunity maximization and did not travel far unless there was an incentive (usually monetary) to do so. Another study, from Finland, supports the distance decay model for crimes of homicide and rape [16].

Apparently, the distance to travel is determined by rational cost-benefit considerations: if targets are available nearby, it makes little sense to go far. However, all such research that examines the spatial pattern of crime sites and infers the distance traveled by the offender rests upon a number of assumptions. Inherent in the notion of the distance decay function are several conjectures about the ways in which the offender forms his perceptions and engages with the people around him. A major assumption is that the offender will not commit a crime in an area where he fears being recognized by the residents [1]. The observation of a buffer zone around the home location, for instance, would support this contention, since the neighbors are likely to recognize the offender. But anyone moving into a new neighborhood soon realizes that it takes time to build acquaintance with the local residents. In western societies, where privacy is a major factor in social interactions, it is a common experience that familiarity and friendship with neighbors take time to develop, for interaction amongst residents builds slowly. This presumption in turn is based on another belief: that living in an area for a certain period of time invariably leads to interactions with the neighbors and local residents, who come to know the person. The assumption implies that those who see the offender frequently are likely to recognize him; that is, the more interactions take place between the offender and the residents, the greater the possibility of his being recognized.

A similar assumption governs the concept of awareness space, which develops from becoming familiar with an area. Here we may apply the routine activities approach to argue that the offender's daily activities take him to or through specific areas where he spends a large proportion of his time. Accordingly, the more time he spends in an area, the greater will be his awareness of its layout, resident population, vulnerabilities, rhythm of activities and attractive targets. Greater awareness of an area thus enables the offender to identify vulnerable targets that can easily be attacked. Offenders have also been described in various ways based upon their ‘hunting’ patterns [7]: the ‘troller’, involved in other non-predatory activities, commits an offense based on opportunity, whereas the ‘trapper’ creates a position that allows him to encounter victims in situations that are under his control. Finally, there is a limit to the distance that the offender will travel as part of his routine activities and in seeking out targets for his crimes. This implies an assumption of cost-benefit analysis based upon a rational calculation of effort and benefit on the part of the offender.

Therefore, based upon the above discussion, the assumptions governing the phenomenon of distance decay are the following:

  • Frequent interactions with residents imply that they begin to recognize the offender

  • Fear of being recognized will deter the offender from committing the crime

  • Living and working for long periods increases familiarity with an area

  • Familiarity is the basis of awareness space of the offender

  • Predatory crimes are committed within the awareness space of the offender

  • Cost benefit considerations will restrict the offender from traveling long distances to commit the crime

To this we can add one more factor, which is that successful commission of a crime will encourage the offender to commit more crimes.

It is difficult to test the validity of these assumptions in a real life situation. In criminology there are many independent variables, such as offender cognition and adaptability, that cannot be manipulated for physical, practical, or moral reasons [17]. One cannot send a motivated offender to a specific location and observe how he learns to commit his crimes and whether his selection of targets reflects distance decay. Even when some information about the home locations of a few offenders and the burglaries they have committed is available [3], it is impossible to judge whether the above-mentioned assumptions hold true. For instance, offenders were seen not to have committed any crime in the immediate vicinity of their homes [3], yet the reason for this pattern could only be speculated to be the fear of being recognized. We cannot test its validity by observing the distances amongst different crime sites. One method of testing the assumption is to directly ask a large sample of offenders why their selection of targets follows a distance decay function and to test their responses statistically. But how an offender begins to develop his awareness space and forms his judgment about recognition by local residents is difficult to learn from a survey questionnaire; an offender himself may not be able to explain how his learning takes place. If it can be done at all, it has to be judged by observing the behavior of a large number of offenders over a period of time. Clearly, this is impossible in reality and has to be assessed by other methods.

Furthermore, criminological theories themselves are relatively poor at explaining the crime phenomenon and its control mechanisms, owing to the limitations of their data and the inability to experiment with a variety of variables [18]. The limits imposed by theory, data and experimentation make it difficult to work from theory to experiment, or from empirical description to theory [17]. Criminological explanations tend to be stated in broad terms that are difficult to test empirically, perhaps one reason why the social sciences have difficulty making headway [19]. The problem is not limited to criminology: even in the physical sciences, explaining uncertainties in weather patterns, the growth and effect of modern technology and communication networks, the adaptive nature of living organisms and many other complex systems that emerge from seemingly large collections of simpler components is a formidable challenge [20].

Computational criminology

A recent development in criminology has been the application of computers and mathematical modeling to test a variety of scenarios that are difficult to judge in real life. Crime is a multidimensional, complex, and dynamic activity. In order to understand its nature one has to comprehend not only its spatio-temporal dimensions but also the nature of the crime, the victim-offender relationship, the role of guardians and the history of similar incidents. The analysis of crime and its control involves massive computing challenges due to the large volume of data and the complexity of human behavior. For example, a set of serious crimes over a period of six months in the Indianapolis metropolitan area amounts to more than 30,000 data points. Rationalizing police beats based on this kind of crime data, along with physical and resource constraints, is a gigantic data analysis task that cannot be done except by applying modern computer simulation techniques and clustering algorithms to achieve customized patrol beats for equitable workload [21]. Criminological problems such as crime pattern analysis, target selection by motivated offenders, the awareness space of serial criminals, offender profiling, and the movement patterns of victims and offenders that lead to hot spots are some areas where expertise from criminal justice, mathematics, data mining, visualization, geographic information systems and distributed computing, together with applications of complex algorithms and computer simulations, is required.

Computational Criminology is emerging as a new interdisciplinary field that applies computer science and mathematical methods to the study of criminological problems [22]. The complexity of human behavior, social interactions and law and society parameters presents extraordinary challenges in modeling criminal behavior and determining the best possible means to control it. Computational Criminology is guided by the notion that crime is a rational act in which the offender weighs risks and rewards to shape his or her behavior. Utilizing concepts derived from Environmental Criminology [1] and the Routine Activity Approach [5], a growing body of research has focused upon the ways in which individuals with motivations for criminal behavior live and move within their awareness spaces, form networks of friends and seek opportunities for crime. The spatio-temporal dynamics of these individuals determine how they learn, encounter and sometimes exploit situations for their criminal acts. The field is bringing promising and innovative techniques for analyzing criminal behavior and exploring solutions to deal with it. Computational Criminology has found applications in modeling burglaries [23], in counter-terrorism planning [24], in analyzing the criminal justice system [25], in exploring drug market dynamics [26] and in modeling street robbery [27].

Computer modeling and simulation help to capture the complexity and diversity of human behavior in a robust and systematic way. Such models can re-create and predict the appearance of complex phenomena by simulating the simultaneous operations and interactions of multiple agents. Simulation and data collection can work together to advance scientific understanding [20]. Simulation provides the means whereby various characteristics of the agents, society and the landscape can be held constant or systematically varied in ways that are impossible using traditional social science methods [28]. These modeling techniques work on the principle that a computer evaluates the model numerically and produces data in order to estimate the characteristics of the model [23]. Simulated data come from a rigorously specified set of rules rather than from direct measurement of the real world [29], and this gives the analyst the flexibility to experiment with a variety of social settings. The process is one of emergence from the lower (micro) level of the system to a higher (macro) level; a key notion is that simple behavioral rules generate complex behavior [24]. It has also been suggested [30] that simulations function like a thought process in which ‘what if’ questions under specific conditions can be examined and the impact of chance can be assessed. Simulations are now becoming valuable for their capability of conducting ‘virtual program evaluation’ in criminology [31]. Furthermore, simulation models are able to make dynamic decisions based on changing information [32].

While a computer program could accommodate a variety of rules to simulate the process, the use of simple models is recommended on the grounds that they provide greater insight into the dynamics [23]. It has also been noted that with simple models the subtle effects of the hypothesized mechanisms are easier to understand or discover, and that the complexity should be found in the results, not in the assumptions of the model [29]. If the goal of a simulation is to attain a greater degree of understanding of some fundamental process, then it is the simplicity of the assumptions that is important, not the accuracy of the surrounding environment [29].

Agent based modeling

A particular technique within Computational Criminology suggests that such learning scenarios can be tested through agent based modeling. In these models an independent agent is created that has the ability to take decisions based upon specific inputs and to interact with the environment and with other agents. An agent can represent an individual, a group, an entity or an organization. The simulation sets rules for each iteration, and each step is governed by a probability that introduces variation into the system. The agent assesses the current situation and takes a decision about the next step based on the assigned probability. This mechanism incorporates realistic, human-like behavior on the part of the agent [33]. Agent based modeling thus provides the means to experiment with a variety of situational factors that guide human behavior, which in turn helps to model human action and judge the impact of the environment on the decisions a person makes. Agent based models can be used to create systems that mimic real scenarios and produce a dynamic history of the system under investigation [34]. In particular, such modeling allows experiments with social situations that human beings confront on a daily basis but that would otherwise be impossible to carry out.

Four important characteristics of agents have been identified [23]. The agent has autonomy: it makes decisions independently, without being guided by an external source, once the initial conditions are set and the system is activated. Agents can be heterogeneous and possess different characteristics; thus one agent can be an offender and another a victim. Agents are reactive and have the capability to respond to environmental cues and modify their actions; an offender, for example, will change his path if residents are suspicious of his activity. Finally, agents are programmed with bounded rationality by limiting their perception of the environment, so that their choices are not always perfectly optimal [23]. This ensures that the agent does not act as a perfectly rational actor, a condition resembling human frailty, in which people are influenced by desires and temptations and take risks. Feedback can be incorporated in agent-based models by allowing agents to change how they apply rules based on experience [31]. Agent-based modeling in crime related situations has been applied to experiment with the effects of collective mis-belief in agent societies and to illustrate how mis-beliefs can spread [35], to outline a model that can be used to investigate civil violence [36] and to model burglary in an urban environment [23]. Computational Criminology is showing signs of gaining momentum [37].

Computational format for agent based modeling

We have prototyped a simulation tool using the techniques of agent based modeling and describe the steps below in plain terms. The simulation mimics the learning process of a motivated offender searching for suitable targets around his home location. The model is designed to validate one of the assumptions defined in the previous section: an agent built on machine learning concepts is trained to commit break-ins around the neighborhood. In particular, we focus upon the second proposition outlined in the Brantingham and Brantingham [1] model, which concerns distance decay from the home base. We see this as the first model in a series of possible models that reflect the evolving complexity of human movement in daily activities. We programmed the agent to follow a machine-learning algorithm that helps it ‘learn’ from environmental cues and modify its actions accordingly. The following describes the steps and logical structure of the agent modeling process:

A grid structure is created to represent the city landscape. Each cell represents a ‘house’ and the grid rows and columns are the ‘streets’. A motivated criminal agent [henceforth called the c-agent] is designed to move randomly in this grid structure from one starting cell. For this model we focus upon the movement of the agent and align the grid so that the starting cell [home] is at the center. Following the well known rational choice perspective, the offender is assigned a ‘cost consideration’: the c-agent has to expend effort to move around and select targets for break-ins in order to make a gain. The c-agent is given a base equity and charged one ‘mark’ as the cost of moving the distance of one cell, so that movement involves a ‘loss’ of equity at the rate of one unit for every cell crossed in the path. The c-agent starts from the home location and randomly chooses one of the four neighboring cells to cross, as it is designed to move only along the horizontal and vertical axes. This random selection of direction is made at every step by letting the simulation program arbitrarily choose a number from 1 to 4. Thus the c-agent moves across the grid in an indiscriminate manner and, after traveling some distance, turns back to return home.
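To make the setup concrete, the following is a minimal sketch of the random movement and per-cell travel cost described above; the grid size, constant names and helper function are our own illustrative assumptions, not the authors' actual program.

    import random

    GRID_SIZE = 201                    # assumed odd width so the home cell sits at the exact center
    HOME = (GRID_SIZE // 2, GRID_SIZE // 2)
    MOVE_COST = 1                      # one 'mark' deducted for every cell crossed

    # The four Manhattan directions the c-agent may choose from at every step.
    DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def random_step(position, equity):
        """Move the c-agent to a randomly chosen neighboring cell and charge the travel cost."""
        dx, dy = random.choice(DIRECTIONS)
        x = min(max(position[0] + dx, 0), GRID_SIZE - 1)   # clamp to stay on the grid
        y = min(max(position[1] + dy, 0), GRID_SIZE - 1)
        return (x, y), equity - MOVE_COST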

For the return journey the c-agent chooses the shortest path home so that the return ‘cost’ is minimal. The program ensures that the agent has a map of the grid structure and knows how to get back ‘home’ from any location. Depending upon the distance travelled [the number of cells traversed], a corresponding portion of ‘marks’ is deducted from the c-agent's equity. The program is designed so that the c-agent ‘turns back’ after expending a set proportion of the equity, ensuring that he is able to reach home without losing all his money. This is necessary so that the c-agent does not ‘decide’ to continue traveling after committing a crime; without such a condition the c-agent could go on a crime spree, bloating his equity and continuing to add to it. The program therefore makes the c-agent turn back immediately after breaking into a cell and places a maximum on the amount that can be spent on travel. This cap triggers the turning back, and after some experimentation we set it at 10% of the initial equity. Based on these parameters the simulation lets the c-agent move around the home location while minimizing costs. The resulting movement pattern is a ‘circle’ around the home location, which is the smallest area that keeps the costs down.
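A sketch of the turn-back rule and the shortest Manhattan return path might look as follows; the function names are assumed, and the 10% travel cap is taken from the description above.

    TRAVEL_BUDGET_FRACTION = 0.10      # travel cap: 10% of the initial equity, as set by experimentation

    def should_turn_back(travel_spent, initial_equity, broke_in):
        """Turn back immediately after a break-in, or once the travel budget is exhausted."""
        return broke_in or travel_spent >= TRAVEL_BUDGET_FRACTION * initial_equity

    def step_toward_home(position, home):
        """Take one step of a shortest Manhattan path back toward the home cell."""
        x, y = position
        if x != home[0]:
            x += 1 if home[0] > x else -1
        elif y != home[1]:
            y += 1 if home[1] > y else -1
        return (x, y)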

We ‘experimented’ with the behavior of the c-agent by varying these conditions one by one. If the c-agent was allowed to roam freely in a random manner without turning back, the movement covered all the cells. Once the condition of ‘cost’ was introduced, the c-agent learned to concentrate upon neighboring cells and to spend no more than 90% of the base equity on movement before turning back home. Even though the c-agent was programmed to randomly select any of the four cells [representing the four directions] surrounding its location at every step, the final pattern that emerged was a circle around the home cell, as shown in Figure 1:

Figure 1. Movement pattern of the c-agent; the color code represents the frequency of movement.

For the next step, stationary agents were situated in all the other cells to serve as the residents [and victims] of the ‘neighborhood’; we designate these as v-agents. Each cell is populated and each home is provided with valuable goods that are attractive to the c-agent. The value of goods is kept uniform across the grid to keep the program simple. The program allows the c-agent to ‘break in’ to a cell, and the value pertaining to that cell is transferred to the equity of the c-agent. The selection of the cell to break into is also made randomly after leaving the home cell: the c-agent moves to a neighboring cell and the program offers the option of breaking into it. If the c-agent decides to break in, the program stipulates that the c-agent turn back and go home after adding the value of the goods to his equity. The program computes the equity upon reaching home, and this then becomes the initial equity for starting the process again. In this experimentation we observed that the c-agent targeted the cells nearest to his home location. This was expected, as it kept the ‘cost’ of travel low while adding value to the base equity.
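The break-in step could be sketched as below; the goods value and break-in probability are illustrative numbers of our own, since the paper states only that the value is uniform across the grid.

    import random

    GOODS_VALUE = 5              # uniform value of the goods in every house (illustrative figure)
    BREAK_IN_PROBABILITY = 0.05  # assumed chance that the c-agent opts to break in at a crossed cell

    def maybe_break_in(equity):
        """Randomly decide whether to break into the current cell; if so, add the goods to the equity."""
        if random.random() < BREAK_IN_PROBABILITY:
            return equity + GOODS_VALUE, True   # break-in committed; the caller then sends the agent home
        return equity, False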

Distance decay assumes that local residents know the offender and hence that he will not commit crimes around his home for fear of being recognized. As mentioned above, it is unclear exactly how one person gets to know another. Clearly, mutual interaction is the starting point of recognition, and frequent interactions cement the process of becoming acquainted. The simulation therefore uses a memory function that records an encountered resident together with the frequency of such encounters and the time elapsed since the last one. It works on the logic that if a person is encountered a number of times, the chance of the two remembering each other is much higher; at the same time, if the person is met once and not encountered again within a specified amount of time, there is a high chance of forgetting. The simulation takes these aspects into account as the c-agent learns the neighborhood locations and its residents.

We use this concept to set up a system whereby one agent gets to recognize another agent when in proximity. The program makes the c-agent ‘aware’ of the resident [v-agent] whose ‘house’ he is crossing while traveling. The c-agent also becomes aware of the identities of the four residents surrounding the cell being crossed [at the boundary of the grid this is suitably accounted for]. In order to simplify the program the c-agent is not permitted to travel diagonally and moves only along Manhattan paths.

The geographical coordinates of the cells mark the identities of the v-agents, and a time stamp is assigned to an identity whenever the c-agent crosses that cell. As time passes, the probability with which the c-agent can recall the identity of the v-agent is made to decrease. This is implemented by letting the c-agent toss a weighted coin to decide whether he remembers a given v-agent, where the weight corresponds to the time elapsed for that v-agent. Determining an action from a probability follows a standard procedure in computer science: if the probability is given as a fractional number between 0 and 1, the computer generates a random real number between 0 and 1; if this random number is greater than the assigned probability the agent takes one action, such as going back home, and otherwise it takes the other action, such as roaming further. Some experimentation was needed to fine tune the program and ensure that the c-agent does not ‘jump’ widely across the grid. Learning takes place by letting the c-agent build a data file of long and short-term memories of interactions with the house occupants in the space in which he is moving. The decision to commit a crime comes from this memory: if the c-agent recognizes a house occupant he will not commit the crime; if the recognition is 'fuzzy' [the house occupant is only in his short-term memory] he tosses a weighted coin and takes a chance on committing the crime. The c-agent ‘gambles’ on the house occupant not recognizing him, for the recognition is mutual: the c-agent and the v-agents recognize each other only through interaction when a particular cell is crossed.
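The ‘weighted coin’ procedure described here is standard; a minimal version, with assumed names, is:

    import random

    def weighted_coin(probability):
        """Return True with the given probability between 0 and 1 (the 'weighted coin toss')."""
        return random.random() < probability

    # Example: with a remembered recognition probability of 0.7, the c-agent
    # recognizes (and expects to be recognized by) the resident 70% of the time.
    recognized = weighted_coin(0.7)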

For instance, if the c-agent is crossing location L1 he will recognize the resident v-agent living there. Suppose the c-agent then moves to location L2 at time T0. If asked at time T0+n, the probability that the c-agent will recognize the resident of L1 is 1 - f(n), where the function f increases with n. An example of such a function is f(n) = 0.1*n; after 1 step recognition occurs 90% of the time, but after 6 steps only 40% of the time. We had to experiment with such functions because, with this particular one, the memory is active for only 10 steps.

Further, the above memory has no way to "get to know" a person. We need something that says that if you repeatedly see a person, you will remember him for a long time after you last saw him, much longer than you would remember someone you have not seen repeatedly. Our requirement, then, is a (linearly) decaying memory: if a v-agent does not see the c-agent for a while, the v-agent forgets the criminal. However, we also had to incorporate familiarity (acquaintances, for instance, are remembered for a time even when they have not been seen in a while). To capture this, the memory decay rate is made to slow down at each encounter, by generalizing the above function to f(n, x) = x*n. As before, we start with x = 0.1, which means that a memory is active for only 10 steps. If the c-agent interacts with the v-agent again within those 10 steps, x is changed to 0.9*x, which ensures that the criminal will be remembered for longer than 10 steps. This is a linear function, but it can be replaced by any function as desired; a number of experiments were conducted with varying functions to finalize the program.

Moreover, recognition is mutual: the resident v-agent will remember the c-agent for a certain period of time once his cell has been crossed. We set up this learning using the simple idea of symmetry. The c-agent and the v-agents (residents) are symmetric, that is, the c-agent is seen by a v-agent if and only if the c-agent can see the v-agent, and this happens every time the c-agent crosses a cell or occupies a neighboring cell along the X or Y axis.

Some other details also had to be accounted for. When a memory decays fully, the decay rate is reset. There are two cases with no memory: when a v-agent has never seen the criminal [c-agent], and when a v-agent has seen the c-agent so long ago that the memory has decayed. We considered treating these differently, in the sense that a forgotten but refreshed memory could have a slower decay than a new memory; however, in our experimentation we found that this difference was not significant for the outcome. The program was set up so that the criminal's memory of a v-agent is the same as that v-agent's memory of the c-agent.
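Putting the pieces of the memory mechanism together (decaying recall, familiarity slowing the decay, and the reset of a fully decayed memory), one possible sketch is the following class; the paper does not publish its code, so the structure and names here are our assumptions.

    class Memory:
        """One agent's decaying memory of another agent, following f(n, x) = x*n."""

        def __init__(self, decay_rate=0.1):
            self.decay_rate = decay_rate   # x in the notation above; 0.1 keeps a memory active for 10 steps
            self.last_seen = None          # time stamp of the most recent encounter

        def recognition(self, now):
            """Probability of recognition after n = now - last_seen steps: 1 - x*n, floored at 0."""
            if self.last_seen is None:
                return 0.0
            n = now - self.last_seen
            return max(0.0, 1.0 - self.decay_rate * n)

        def encounter(self, now):
            """Record a sighting; repeated sightings slow the decay, a fully decayed memory resets."""
            if self.last_seen is not None and self.recognition(now) > 0.0:
                self.decay_rate *= 0.9     # familiarity: the agent will now be remembered longer
            else:
                self.decay_rate = 0.1      # never seen, or fully forgotten: decay rate is reset
            self.last_seen = now

Because recognition is mutual, a single such memory object can be shared by the c-agent and the corresponding v-agent, which is one way the symmetry described above could be enforced.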

Using this memory, at each grid point the c-agent knows the probability with which he will be recognized [and hence caught]. Based on this probability he decides whether or not to break in there. At every stage the computer uses the toss of a weighted coin against this probability to determine whether the c-agent commits the crime or leaves that cell alone.

The system monitors the interaction of the c-agent with the v-agents each time a cell is crossed during random movement. That is, when the c-agent moves from point ‘LA’ to point ‘LB’ on the grid, all the residents of the intervening cells get to see the c-agent and vice versa. This serves the objective of ‘recognition’ of the criminal agent by the particular resident whose house is crossed during the movement. The resident v-agent observes the c-agent crossing his property and keeps this recognition in short-term memory. If the c-agent returns from point ‘LB’ to his home location by a different path, the residents of all the intervening cells crossed on that path also get to recognize the c-agent.

Each resident v-agent is given a ‘memory’ with both a short-term and a long-term component. As the c-agent crosses a cell, the v-agent recognizes him for a short duration according to the built-in clock of the system. If the c-agent interacts frequently with a particular resident [v-agent] by crossing through his cell repeatedly, the recognition becomes embedded in long-term memory. Clearly, the ‘neighbors’ of the c-agent will be those who hold him in their long-term memory.
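The promotion of a recognition from short-term to long-term memory could be sketched as follows; the number of encounters required for promotion is an assumption of ours, as the paper does not state one.

    LONG_TERM_THRESHOLD = 5   # assumed number of encounters after which recognition becomes long-term

    class ResidentMemory:
        """A v-agent's recognition of the c-agent, promoted to long-term after repeated encounters."""

        def __init__(self):
            self.encounters = 0
            self.long_term = False

        def see_offender(self):
            self.encounters += 1
            if self.encounters >= LONG_TERM_THRESHOLD:
                self.long_term = True   # this resident is now a 'neighbor' who reliably recognizes the c-agent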

Next, the c-agent is assigned the characteristic of ‘breaking into homes’ at infrequent intervals while moving randomly. In the present situation all homes have the same attraction value, so that opportunities for break-ins are uniformly distributed. With this characteristic the c-agent commits break-ins randomly in the region around his home along various movement paths, based upon the distance and the recognition of the v-agent. This produces a scatter plot of randomly committed break-ins around the home location of the offender. Each break-in provides the ‘profit’ to the offender for committing the crime. The simulation displays the cells that are targeted more frequently in a color code.

Each v-agent is also given the characteristic of ‘catching’ the offender [c-agent] if the recognition has reached the level of long-term memory and the c-agent tries to ‘break in’. This is achieved by reducing the probability of a successful break-in when the probability of recognition is high. The simulation shows that the neighboring residents catch the c-agent most often close to his home location. Once ‘caught’ the c-agent loses a certain percentage of his equity and the program starts again. We experimented with the size of the grid to observe this phenomenon and eventually enlarged it to a 200×200 cell grid for a meaningful pattern to emerge. The number of simulation runs was also enlarged to 500, with each run ranging from 1000 to 2000 steps.
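A sketch of the catch-and-penalty mechanic, under our own assumed penalty fraction (the paper says only that ‘a certain percentage’ of the equity is lost), is:

    CATCH_PENALTY = 0.25   # assumed fraction of equity lost when the c-agent is caught

    def attempt_break_in(equity, resident_has_long_term_memory, goods_value):
        """A break-in succeeds unless the resident already holds the c-agent in long-term memory."""
        if resident_has_long_term_memory:
            return equity * (1.0 - CATCH_PENALTY), False   # caught: lose a share of the equity
        return equity + goods_value, True                  # success: goods added to the equity

After a catch, or when the equity drops below a threshold, the run is reset, which matches the restart behavior described in the text.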

Combining all these characteristics produces the following scenario: the c-agent moves randomly around his home but does not travel too far because of the costs of travelling. Every time he crosses a cell he is seen by the resident, and the more often this happens the better the resident comes to recognize him. The cells, representing houses, provide uniform opportunities and hence a ‘profit’ for the c-agent, which serves as the incentive to commit crime. The c-agent ‘gains marks’ for every successful break-in and ‘loses marks’ by travelling [one mark per cell] or by getting caught through recognition. If a resident v-agent has the c-agent in his long-term memory, the resident catches the c-agent, who then loses a certain percentage of his equity. Every time the c-agent is caught, or the base equity drops below a threshold, the program resets and the process starts again. The simulation is run a large number of times and the resulting plot of targets selected around the home location is shown in Figure 2:

Figure 2. Targets selected on the grid; the color code represents the targets hit by the c-agent.

The scatter shows that the c-agent has to travel farther afield to maintain his gains, because the ‘local’ residents will catch him. The color code indicates the frequency with which the c-agent hits a target. Significantly, there is a small buffer zone where no crimes are committed. The blue region represents the space where the c-agent commits no crimes; the red and yellow regions show where crimes are committed frequently; and the light blue region shows where crimes are committed only infrequently.

The frequency of crime commission and the distance from the home location are then plotted as a cumulative frequency plot, shown in Figure 3:

Figure 3. Cumulative frequency distribution of targets versus distance.

The resulting plot shows a distribution of the kind suggested by the distance decay function. Its shape resembles the one theorized by Brantingham and Brantingham [1], though it is jagged and has a small buffer zone where the probability of committing a crime is negligible. The simulation suggests that, given the above conditions, the c-agent will not travel far to commit crimes and will avoid places in close proximity to his home. We believe a very large number of ‘runs’ would smooth the curve so that the peaks and sudden drops coalesce; we therefore consider the overall shape of the cumulative frequency curve and do not address the jagged peaks and valleys embedded in it. A large number of crimes are committed because all houses are equally attractive to the c-agent. In practice, houses will vary in the goods available to steal and in their attractiveness for break-ins; this will lower the frequency of break-ins, which will then vary across the space around the home of the c-agent.
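The cumulative frequency curve in Figure 3 can be reproduced from a list of break-in locations by aggregating counts over Manhattan distance from home; a small sketch (our own helper, assuming the simulation has recorded the break-in coordinates) is:

    from collections import Counter

    def cumulative_frequency_by_distance(break_ins, home):
        """Aggregate break-in counts by Manhattan distance from home into a cumulative frequency curve."""
        counts = Counter(abs(x - home[0]) + abs(y - home[1]) for x, y in break_ins)
        curve, running = [], 0
        for d in range(max(counts) + 1):
            running += counts.get(d, 0)
            curve.append((d, running))   # (distance, cumulative number of break-ins within that distance)
        return curve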

Discussion

The designed program showed remarkable coherence with the ‘distance decay’ concept. The c-agent committed break-ins much more frequently near his home location and did not travel far to seek targets. Even when greater opportunities were provided by expanding the grid and ensuring that every cell contained ‘valuable’ goods, the agent did not seem ‘interested’ in venturing far from the home location. Despite the increased probability of being recognized through greater interaction, the probable ‘gain’ worked out to be more than the ‘loss’ even though the value of the goods was kept marginal. We also experimented with some modifications to the wandering of the criminal. The program was modified so that at each grid point the c-agent was given the option of deciding to go home. The result was only to make him circle around his home more often while still seeking targets in close proximity.

The second modification was to let the agent decide, before leaving home, how long the trip would be. This helped him learn where he gets caught and where he can commit a crime at a profit, but overall the selection of targets did not change much. We made one further modification to reflect a real life situation by introducing an occupancy probability at every home: the c-agent did not know whether a cell was occupied and had to take a chance when breaking in. The probability of occupancy was kept uniform for every cell. The results suggested that the final plot still resembled the one shown in Figure 2 and supported the distance decay function.

Nevertheless, this result needs to be interpreted with caution. Simulation is still a nascent methodology in criminology and is not yet widely practiced. There are several questions regarding the validity of the simulated model and its implications for criminological theories. Three criteria for ‘validating’ computer models have been proposed [38]. Qualitative credibility is established if the model is consistent with what is expected of the result. Internal quantitative credibility is met when the model output corresponds to observations that are part of the data used to develop and calibrate the model. External quantitative credibility corresponds to the situation in which the model output corresponds well to observations from data not used to develop and calibrate the model [38]. It needs to be kept in mind that in the social sciences good, and even valid, evaluation data are difficult to obtain. Indeed, in criminology the data are not only difficult to obtain but by their very nature are likely to be deliberately misleading [31]: the offender is unlikely to provide full information about his criminal behavior, and even police data are likely to be shaped by organizational policies. Accordingly, for a system based upon agent based modeling, ‘qualitative credibility’ is perhaps the main basis for accepting these results. The simulated pattern corresponds to a good extent to the theoretical distance decay model suggested by Brantingham and Brantingham [1], and its replication is a point of validation of the model.

As discussed above, there are no data sets that can be used to test the efficacy of the simulated model. The police do not record the exact movements of offenders in reaching their targets, and the ways in which offenders search for their targets are perhaps unknown even to them. The offender builds his awareness space through frequent movement and interaction in a given area, but the selection of targets could be based upon additional intelligence and observation rather than exclusively on the condition of ‘recognition’. Even in cases where the police are able to apprehend the offender and obtain a full confession, this information is not recorded properly, and in any case such examples are too few and far between to develop into a useful database. Agent based models do not represent an empirical test of a theory but rather the extent to which the theory is plausible [7]. Thus, validation of the model has to be accepted largely on the basis of qualitative credibility [38].

Another major limitation is that the selection of targets is unlikely to be guided only by the two conditions of cost and fear of recognition. Crime pattern theory [39] and the extant research suggest that offender movement is structured by neighborhood characteristics and the target backcloth, and is influenced by other locations in the offender's activity space, such as the workplace or past residences [40, 41]. Nevertheless, these two conditions appear to play a significant role, for by themselves they replicate the distance decay function.

Moreover, we acknowledge that this is a simplistic model that begins with the first condition suggested by Brantingham and Brantingham [1]. The agent begins as if there were no awareness space, and his movement is guided solely by geography and the conditions stipulated in the model. In reality, the offender commits crimes within an awareness space, as suggested by the fully developed Brantingham and Brantingham model [1]. But our model is able to show how this awareness space is itself developed through random movements. Every time the agent ventures out, explores the neighborhood and returns, the information about the paths, the v-agents and the ‘gains’ made in the break-ins is registered in memory. This is the process by which the agent builds his awareness space, and it mimics the development of awareness space in human beings as well. The agent learns to maximize his gain by experimenting with various paths and break-ins. This is by itself an important contribution, for it provides the means to test many other variations of the learning process.

Indeed, the significant contribution of this paper is that it demonstrates a methodology whereby simple assumptions guide the behavior of agents, which in turn helps explore and understand the complex phenomenon of crime. Agent based modeling has the further advantage of not being completely deterministic in its formulation. The system of ‘gain and loss’ introduces feedback into the actions of the agent, with the consequence that the agent ‘learns’ to find suitable targets that increase his gain and minimize his loss. It is this ‘learning’ that corresponds to real life learning, in which reward and punishment operate to shape the behavior of the motivated offender.

Future work

We are now developing the simulation to add more features to the program. In the future we intend to broaden the assumptions and apply more characteristics to the c-agent and the v-agents. The grid will have one-way streets, blocked exits and a non-uniform distribution of homes to target. The next version of the simulation will go beyond a simple grid based structure and incorporate real-world aspects based on GIS city maps. Further, to test the theory in a realistic situation, we need to make each resident of the region [the v-agents] also move randomly. If everyone is moving, the c-agent will have the option of breaking into neighboring cells even when the residents hold him in their long-term memory. Furthermore, each cell will be weighted differentially for affluence and security features, setting the stage for the offender to pick more affluent targets even if he has to travel longer distances. Another feature is a characteristic for detecting whether a home is unoccupied: the c-agent will ‘observe’ whether newspapers are piled up or there is no vehicle in the garage at each cell, and if so, the probability of committing the crime will increase accordingly. Finally, some police ‘guardians’ will also be made to move randomly, and the c-agent will commit a crime only if the police vehicle is at least a minimum distance away.

We believe such a program will help us understand how motivated offenders construct their awareness space, develop skills in identifying vulnerable targets and finally select them for crime. We have used a system in which frequent interaction leads to recognition; this will be explored further to see how the agent might build friendships with similarly motivated offenders and thereby increase his awareness space and range of operations. The Brantingham and Brantingham model [1] describes a wider range of operations for the motivated offender: he first commits crimes around his home location, then adds the areas surrounding his places of work and recreation, and gradually includes the regular paths he takes in his routine movements from home to work to shopping area and back. If the principles that guide this movement and establish the model are those described above, namely rational choice based upon cost considerations and the risk of recognition, together with routine activities that take him to other parts of the region, then we expect the simulated model to emulate this pattern.

Conclusion

Clearly, agent based modeling provides an interesting means to experiment with situations that can closely resemble reality. This has significant implications for testing criminological theories by setting up an experiment and letting the machine simulate all the possibilities. The key is to let the machine learn by trial and error rather than to define the possibilities in advance. The logical structure has to be arranged so that the experiment contains unexpected situations and possibilities, and above all the researcher should not impose conditions that lead to a pre-defined conclusion. The machine must be designed to mimic human learning by trial and error, basing preferential actions on statistical results that increase the probability of success.

Agent based modeling appears to provide a promising method to examine, explore and even test criminological theories. There is a general consensus that criminal behavior is learned through interaction, perhaps on a trial and error basis [42]: past actions that lead to undesirable outcomes are not repeated and new avenues are explored. Agent based modeling can implement this conceptualization through laboratory experiments. The rational choice perspective [43] also suggests that offenders make an assessment of risk and gain that guides their actions. Such ‘bounded rationality’ lends itself to reproduction in a machine environment through an agent based modeling method similar to the one outlined above [44]. Such a learning environment could also be extended to include more than one agent [45]. A feedback system could enable the agents to learn from one another through mutual interaction, and such a program could examine theories about group behavior, the formation of mobs and crowd control tactics to be used by the police. Agent based models constructed on a Geographic Information System platform could be developed to present real life scenarios and to analyze ways of handling large groups of anti-social elements, such as were seen during the recent London riots. Furthermore, gang culture, the formation of special interest groups and the spread of information within extended communities through modern communication are all arenas where machine learning through agent based modeling can find applications. The ability to explore options and validate concepts is perhaps the most significant possibility arising from this technology.

References

  1. Brantingham PJ, Brantingham PL: Environmental Criminology. Sage, Beverly Hills, CA; 1981.
  2. Rossmo K: Geographic Profiling: Target Patterns of Serial Murderers. PhD Thesis. Simon Fraser University, Burnaby; 1995.
  3. Rengert GF, Piquero AR, Jones PR: Distance decay reexamined. Criminology 1999, 37(2):427–445. 10.1111/j.1745-9125.1999.tb00492.x
  4. Capone DL, Nichols WW Jr: Urban structure and criminal mobility. Am. Behav. Sci. 1976, 20: 199–213.
  5. Cohen LE, Felson M: Social Change and Crime Rate Trends: A Routine Activity Approach. Am. Sociol. Rev. 1979, 44: 588–608. 10.2307/2094589
  6. Felson M: Crime and Nature. Sage, Thousand Oaks, CA; 2006.
  7. Rossmo K: Geographic Profiling. CRC Press, Boca Raton, FL; 2000.
  8. LeBeau JL: The methods and measures of centrography and the spatial dynamics of rape. J. Quant. Criminol. 1987, 3: 125–141. 10.1007/BF01064212
  9. Canter D, Larkin P: The environmental range of serial rapists. J. Environ. Psychol. 1993, 13: 663–669.
  10. Eck JE: Drugs Trips: Drug Offender Mobility. Paper presented at the 44th American Society of Criminology Annual Meeting, New Orleans; 1992.
  11. Wiles P, Costello A: The ‘road to nowhere’: the evidence for travelling criminals. Home Office Research Study 207. Home Office, London; 2000.
  12. Block R, Galary A, Brice D: The Journey to Crime: Victims and Offenders Converge in Violence Index Offences in Chicago. Secur. J. 2007, 20: 123–137. 10.1057/palgrave.sj.8350030
  13. Rand A: Patterns in juvenile delinquency: A spatial perspective. University Microfilms International, Ann Arbor, MI; 1984.
  14. Van Koppen PJ, de Keijser JW: Desisting distance decay: On the aggregation of individual crime trips. Criminology 1996, 35(3):505–515.
  15. Van Koppen PJ, Jansen RWJ: The road to the robbery: travel patterns in commercial robberies. Br. J. Criminol. 1998, 38(2):230–246. 10.1093/oxfordjournals.bjc.a014233
  16. Santtila P, Zappala A, Laukkanen M: Testing the utility of a geographic profiling approach in three rape series of a single offender: A case study. Forensic Sci. Int. 2003, 131(1):42–52. 10.1016/S0379-0738(02)00385-7
  17. Liu L, Eck J (Eds): Artificial Crime Analysis Systems: Using Computer Simulations and Geographic Information Systems. IGI Global, Pennsylvania; 2008.
  18. Weisburd D, Piquero A: How Well Do Criminologists Explain Crime? Statistical Modeling in Published Studies. Crime and Justice 2008, 17: 453–502.
  19. Hedström P: Dissecting the Social: On the Principles of Analytical Sociology. Oxford University Press, UK; 2005.
  20. Miller JH, Page SE: Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press, Princeton Studies in Complexity; 2007.
  21. Verma A, Ramyaa R, Marru S, Fan Y, Singh R: Rationalizing Police Patrol Beats using Voronoi Tessellations. Proceedings of the IEEE International Conference on Intelligence and Security Informatics, Vancouver, BC, Canada; 2010.
  22. Brantingham PL, Brantingham PJ: Computer simulation as a tool for environmental criminologists. Security Journal 2004, 17(1):21–30. 10.1057/palgrave.sj.8340159
  23. Malleson N: Agent Based Modeling of Burglary. PhD Thesis. School of Geography, The University of Leeds, UK; 2010.
  24. Tsang HH, Park AJ, Sun M, Glasser U: GENIUS: A Computational Modeling Framework for Counter-Terrorism Planning and Response. Proc. IEEE Intelligence and Security Informatics, Vancouver, BC, Canada; 2010.
  25. Alimadad A, Borwein P, Brantingham PL, Brantingham PJ, Dabbaghian-Abdoly V, Ferguson R, Fowler E, Ghaseminejad AH, Giles C, Li J, Pollard N, Rutherford A, Waal A: Utilizing Simulation Modeling for Criminal Justice System Analysis. In Artificial Crime Analysis. Edited by: Liu L, Eck J. Idea Group Publishing, Hershey, PA; 2009:372–411.
  26. Dray A, Mazerolle L, Perez P, Ritter A: Policing Australia’s heroin drought: using an agent-based model to simulate alternative outcomes. J. Exp. Criminol. 2008, 4(3):267–287. 10.1007/s11292-008-9057-1
  27. Liu L, Wang X, Eck J, Liang J: Simulating crime events and crime patterns in a RA/CA model. In Geographic Information Systems and Crime Analysis. Edited by: Wang F. Idea Publishing, Reading, PA; 2005:197–213.
  28. Groff ER: Simulation for Theory Testing and Experimentation: An Example Using Routine Activity Theory and Street Robbery. J. Quant. Criminol. 2007, 23: 75–103. 10.1007/s10940-006-9021-z
  29. Axelrod R: Advancing the art of simulation in the social sciences. In Simulating Social Phenomena. Edited by: Conte R, Hegselmann R, Terna P. Springer, Berlin; 1997:21–40.
  30. Johnson SD: Repeat Burglary Victimization: A tale of two theories. J. Exp. Criminol. 2008, 4: 215–240. 10.1007/s11292-008-9055-3
  31. Eck J, Liu L: Contrasting simulated and empirical experiments in crime prevention. J. Exp. Criminol. 2008, 4: 195–213. 10.1007/s11292-008-9059-z
  32. Bonabeau E: Agent-based modeling: Methods and techniques for simulating human systems. Proc. Natl. Acad. Sci. 2002, 99: 7280–7287. 10.1073/pnas.082080899
  33. Moss S, Edmonds B: Towards good social science. J. Artif. Soc. Soc. Simul. 2005, 8(4):13. http://jasss.soc.surrey.ac.uk/8/4/13.html
  34. Epstein JM, Axtell R: Growing Artificial Societies: Social Science From the Bottom Up. Brookings Institution Press and MIT Press, Cambridge, MA; 1996.
  35. Doran J: Simulating collective mis-belief. J. Artif. Soc. Soc. Simul. 1998, 1(1). http://jasss.soc.surrey.ac.uk/1/1/3.html
  36. Epstein JM: Modeling civil violence: An agent-based computational approach. Proc. Natl. Acad. Sci. 2002, 99(3):7243–7250. 10.1073/pnas.092080199
  37. Brantingham PL, Brantingham PJ, Vajihollahi M, Wuschke K: Crime analysis at multiple scales of aggregation: A topological approach. In Putting Crime in its Place: Units of Analysis in Geographic Criminology. Edited by: Weisburd D, Bernasco W, Bruinsma G. Springer, New York; 2009.
  38. Berk R: How you can tell if the simulations in computational criminology are any good? J. Exp. Criminol. 2008, 4: 289–308. 10.1007/s11292-008-9053-5
  39. Brantingham PL, Brantingham PJ: Patterns in Crime. Macmillan, New York; 1984.
  40. Bernasco W, Nieuwbeerta P: How do residential burglars select target areas? Br. J. Criminol. 2005, 45(3):296–315.
  41. Bernasco W: A sentimental journey to crime: Effects of residential history on crime location choice. Criminology 2010, 48: 389–416. 10.1111/j.1745-9125.2010.00190.x
  42. Akers RL: Social Learning and Social Structure: A General Theory of Crime and Deviance. Northeastern University Press, Boston; 1998.
  43. Clarke RV, Cornish DB: Rational Choice. In Explaining Criminals and Crime. Edited by: Paternoster R, Bachman R. Roxbury, Los Angeles; 2001:23–42.
  44. O’Sullivan D, Haklay M: Agent-based models and individualism: Is the world agent-based? Environ. Plann. 2000, 32(8):1409–1425. 10.1068/a32140
  45. Brantingham PL, Glasser U, Kinney B, Singh K, Vajihollahi M: Modeling Urban Crime Patterns: Viewing Multi-Agent Systems as Abstract State Machines. In Proc. 12th International Workshop on Abstract State Machines. Edited by: Beauquier D, Börger E, Slissenko A. Paris; 2005.


Author information


Correspondence to Arvind Verma.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contribution

AV conceptualized the research and designed the basic structure of the experiment. He also did the literature review and writing of the article. SM helped set up the theme for computer simulation and design of the analysis. RR designed and carried out the simulation and the production of the figures and results. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Verma, A., Ramyaa, R. & Marru, S. Validating distance decay through agent based modeling. Secur Inform 2, 3 (2013). https://doi.org/10.1186/2190-8532-2-3
