This document discusses spatial interaction models and how they can be constrained to observed origin totals, destination totals, or both. Constraining models allows exploring scenarios while keeping certain parameters, like incomes or job numbers, constant. Examples given include retail models constrained to origin incomes, and transportation models constrained to destination job numbers to estimate commute patterns from new developments.

Uploaded by morris gichuhi

INTRODUCTION

This paper aims to understand topological network measures, such as closeness centrality, alongside the full family of spatial interaction models of which the unconstrained model is just the start. Since Wilson's original work there have been any number of incremental advances and alternatives, such as Stewart Fotheringham's Competing Destinations models, Pooler's production/attraction/cost relaxed models, Stillwell's origin/destination parameter-specific models and Dennett and Wilson's own multi-level model, to name just a few. In this session we will explore the rest of Wilson's family: the Production (origin) Constrained Model, the Attraction (destination) Constrained Model and the Doubly Constrained Model. We will see how we can, again, use a Poisson regression model in R to calibrate these models and how, once calibrated, we can use the models in different contexts, such as Land Use Transportation Interaction (LUTI) modelling, retail modelling and migration modelling.

Topological Network

We first need to go to the following website to extract the shapefile for the bit of street network that we need: http://extract.bbbike.org. We then need to break the line strings up into individual segments; a simple approach is to loop over the geodataframe rows and then over each line string. There are 55 disconnected components, so we just take the largest one. We create a weighted graph from the ldn_segs geodataframe, although there seem to be some duplicated edges, so we use a multigraph. networkx is bad at drawing large graphs, so we will be using graphviz to make a layout. matplotlib finds this hard to draw, so expect it to take some time!

We now create some coordinates to use for plotting and convert the points geodataframe into a dictionary of positions.
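The segmenting step above can be sketched as follows. The coordinates here are toy stand-ins for the ldn_segs geometries, and the variable names are my own rather than the original notebook's:

```python
import math
import networkx as nx

# Toy line strings (coordinate sequences) standing in for the ldn_segs geometries.
linestrings = [
    [(0, 0), (1, 0), (1, 1)],
    [(1, 1), (2, 1)],
    [(5, 5), (6, 5)],  # deliberately disconnected from the rest
]

# A MultiGraph keeps the duplicated edges mentioned above.
G = nx.MultiGraph()
for ls in linestrings:
    # break each line string into its individual segments
    for a, b in zip(ls[:-1], ls[1:]):
        G.add_edge(a, b, distance=math.dist(a, b))

# There are several disconnected components; keep only the largest.
largest = max(nx.connected_components(G), key=len)
G_main = G.subgraph(largest).copy()
print(G_main.number_of_nodes(), G_main.number_of_edges())
```

On the real data the inner loop would run over each geometry's coordinate pairs in exactly the same way, with the segment length as the edge weight.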
Centrality measures

The next task is to compute closeness centrality, which takes too long on the full network. Unfortunately networkx doesn't implement a cutoff like igraph does, so we borrow clos_osm from Python igraph; the plot is as shown below.
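networkx has no cutoff argument for closeness, but a bounded version can be hand-rolled with single-source Dijkstra searches. This is a sketch of the idea behind igraph's cutoff, not the clos_osm function itself:

```python
import networkx as nx

def closeness_with_cutoff(G, cutoff, weight='distance'):
    # For each node, only consider shortest paths of length <= cutoff,
    # mimicking igraph's closeness(cutoff=...) estimation.
    scores = {}
    for n in G:
        lengths = nx.single_source_dijkstra_path_length(
            G, n, cutoff=cutoff, weight=weight)
        reached = len(lengths) - 1      # nodes reached, excluding n itself
        total = sum(lengths.values())   # sum of distances to them
        scores[n] = reached / total if total > 0 else 0.0
    return scores

# toy network: a path of 5 nodes with unit distances
G = nx.path_graph(5)
nx.set_edge_attributes(G, 1.0, 'distance')
clos = closeness_with_cutoff(G, cutoff=2)
```

Because each search stops at the cutoff, the cost per node is bounded, which is what makes the measure tractable on a large street network.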


Impact measures
Going forward, we visualize how many nodes of each type there are and the degree distribution. As the histogram is labelled from 0, we can use these labels directly as node indices. We add vertex x and y positions (multiplying y by -1 to flip for Cairo plotting) and finally add the edge weights. If we plot the network with a graph layout we won't get any sense of what it is, so we add 'x' and 'y' coordinates to the vertices; igraph then knows how to plot spatially. We then get the RGBA values, throw away alpha, map to a list of tuples, and plot the degree distribution.
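The degree-distribution step can be sketched like this; the karate-club graph is a stand-in for the street network:

```python
import networkx as nx
from collections import Counter

G = nx.karate_club_graph()           # stand-in network
degrees = [d for _, d in G.degree()]

# how many nodes of each degree there are
dist = Counter(degrees)
for k in sorted(dist):
    print(k, dist[k])
```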
Nodal removal

Nodes are removed according to a selected property, e.g. degree, closeness centrality or betweenness centrality, to explore the effects on the network of removing the highest-impact nodes first. We add in some geographic locations for this plot, tidy up the nodes, and multiply the y-axis coordinates by -1, as Cairo plots from the top left rather than the bottom left.
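A minimal sketch of this removal experiment (the function and variable names are my own, and the karate-club graph stands in for the tube network):

```python
import networkx as nx

def targeted_removal(G, k, score=nx.degree_centrality):
    # Remove the k highest-scoring nodes one at a time, recording the
    # size of the largest connected component after each removal.
    H = G.copy()
    sizes = []
    for _ in range(k):
        scores = score(H)                          # recompute after each removal
        H.remove_node(max(scores, key=scores.get))
        sizes.append(max(len(c) for c in nx.connected_components(H)))
    return sizes

G = nx.karate_club_graph()
sizes = targeted_removal(G, 3)
```

Swapping `score` for `nx.closeness_centrality` or `nx.betweenness_centrality` gives the other removal strategies discussed above.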

Some pairs of stations appear multiple times, e.g. node 11, due to the different lines that pass through the station. To get all instances, and to cope with issues such as spaces in the names, we use pandas, which handles this nicely with its string functions: matching means that the pattern 'Baker' must occur at the beginning of the text.
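The pandas matching described above, sketched on toy station names:

```python
import pandas as pd

stations = pd.DataFrame({
    'name': ['Baker Street', 'Bank', 'Baker Street', 'Barbican']
})

# str.match anchors the pattern at the beginning of the text,
# so 'Baker' matches 'Baker Street' but not 'Bank' or 'Barbican'
mask = stations['name'].str.match('Baker')
print(stations[mask])
```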


The Girvan-Newman method is based on computing betweenness in an iterative way, hence it is extremely slow for large networks. Instead, we assign group ids to the groups found by the fast greedy modularity algorithm: first assign group ids to node ids as a dictionary, then finally assign them to the nodes.
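A sketch of that group-assignment step, using networkx's implementation of fast greedy (Clauset-Newman-Moore) modularity maximisation; the karate-club graph is a stand-in for the network in the text:

```python
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()

# fast greedy modularity maximisation
comms = community.greedy_modularity_communities(G)

# assign group ids to node ids as a dictionary...
group_of = {n: i for i, c in enumerate(comms) for n in c}

# ...and finally assign to nodes
nx.set_node_attributes(G, group_of, 'group')
```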

Flows: weighted network


Spatial interaction models

Models and calibration


Drawing the tube network using the defaults (a spring layout) tells us little. Converting the stations info to a dictionary of positions assigns spatial locations to the points, and it now looks like the original shapefile plot!

Next we compute betweenness. For multigraphs, networkx only computes the topological betweenness; this happens even if the weight parameter is set. Let's add the names of the stations to the graph. Our multigraph will have several pairs of stations connected by multiple lines, so we can simplify the graph to remove duplicate edges. If we want to calculate weighted betweenness we must specify the weight (the default is none). Adding distances may change the shortest paths compared to the topological case, where each edge counts equally. Computing betweenness returns a dictionary:

bet_london = nx.betweenness_centrality(g_london, weight='distance')

# make bet_london a pandas dataframe for ease
bet_london = pd.DataFrame.from_dict(bet_london, orient='index')

f, ax = plt.subplots(figsize=(16, 10))
# scale node sizes by their relative betweenness
sizes = (bet_london[0] / bet_london[0].max() * 100).values
nx.draw(g_london, pos=pos_bng, ax=ax, node_size=sizes)
ax.set_aspect('equal')

The stations represent the possible nodes. We have to cast the ids as strings, as otherwise they get assigned as node ids; we then take the edge list and add the edges, again as strings to match the names. Finally we add the edge weights and draw a basic plot. Now we want to extract the actual station locations from these data: we'd like the id and name of each station, and the starting point of the line representing the connection. Here's an approximation of the way to do it which uses tidy() from broom.
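That edge-list construction can be sketched as follows (toy ids and distances, not the tube data itself):

```python
import networkx as nx

# toy edge list: (origin id, destination id, distance)
edges = [(1, 2, 1.2), (2, 3, 0.8), (1, 3, 2.1)]

g = nx.Graph()
for a, b, dist in edges:
    # cast the ids as strings so they match the station names,
    # rather than being treated as integer node labels
    g.add_edge(str(a), str(b), distance=dist)

# weighted betweenness: the direct 1-3 edge (2.1) is longer than
# going via 2 (1.2 + 0.8 = 2.0), so node '2' lies on that shortest path
bet = nx.betweenness_centrality(g, weight='distance')
```

This tiny example also shows why adding distances can change shortest paths relative to the topological case: topologically 1-3 is a single hop, but by distance the route via 2 wins.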
Scenarios

Now that we have calibrated our parameters and produced some estimates, we can start

to play around with some what-if scenarios. In a 'what if' scenario, we make the

assumption that the parameters for alpha and beta are universal (that is - they don't

change subject to circumstance), and we use this model as a basis for exploring different

scenarios by changing other data in the model, such as the observed $W_{j}$ values,

here median income at the destination (the attractive force), or the cost of travelling

between two places. As we're using straight-line distance to stand in for cost here, it doesn't make a great deal of sense to change that in our model! So, by way of example -

What if the government invested loads of money into a new Car Plant in Barking and

Dagenham and as a result, average wages increased from a mere £16,200 to


£25,000. A far-fetched scenario, but one that could make a good experiment. If we recall the estimates from our unconstrained model, none of them summed to the observed in- and out-flow totals. Our estimates only summed to the grand total of flows, but this is because we were really fitting a 'total constrained' model which used $k$ - the constant of proportionality - to ensure everything more or less added up (subject to rounding errors).
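The what-if idea can be shown numerically. All parameter values and masses below are made up for illustration; only the £16,200 to £25,000 income change comes from the text:

```python
import numpy as np

mu, alpha, beta = 1.0, 1.5, 2.0          # hypothetical calibrated parameters
Vi = np.array([80.0, 120.0])             # origin masses (made up)
d = np.array([[5.0, 10.0],               # straight-line distances (made up)
              [8.0, 4.0]])
grand_total = 1000.0                     # observed total flows

def total_constrained_flows(Wj):
    # T_ij = k * V_i^mu * W_j^alpha * d_ij^-beta, with k (the constant of
    # proportionality) chosen so the estimates sum to the grand total
    T = Vi[:, None]**mu * Wj[None, :]**alpha * d**-beta
    return grand_total / T.sum() * T

base = total_constrained_flows(np.array([16200.0, 20000.0]))
scenario = total_constrained_flows(np.array([25000.0, 20000.0]))
# both matrices sum to the grand total, but flows shift towards destination 0
```

Because the parameters are held fixed, the only thing that changes between runs is the $W_j$ vector, which is exactly the what-if logic described above.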

# Finally, the goodness of fit for the attraction constrained model
print("R squared =", calcR2(cdatasub['Total'], cdatasub['attrsimfitted']))
print("RMSE =", calcRMSE(cdatasub['Total'], cdatasub['attrsimfitted']))
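calcR2 and calcRMSE are not defined in this extract; minimal versions consistent with their use above might look like this (my own sketch, not the original helper functions):

```python
import numpy as np

def calcR2(observed, estimated):
    # squared Pearson correlation between observed and estimated flows
    r = np.corrcoef(np.asarray(observed, float),
                    np.asarray(estimated, float))[0, 1]
    return r**2

def calcRMSE(observed, estimated):
    # root mean squared error of the estimates
    obs = np.asarray(observed, float)
    est = np.asarray(estimated, float)
    return np.sqrt(np.mean((obs - est)**2))
```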

There are, of course, plenty of things we could try out. For example:

We could try mapping the coefficients or the residual values from the model to see if there is any patterning in either the over- or under-prediction of flows. We could also try running our own version of a LUTI model by first calibrating the model parameters, then plugging these into a multiplicative version of the model and adjusting the destination constraints to see which origins are likely to generate more trips.

Where we have a full flow matrix to calibrate parameters, we can incorporate

the row (origin) totals, column (destination) totals or both origin and

destination totals to constrain our flow estimates to these known values. There

are various reasons for wanting to do this, for example:

1. If we are interested in flows of money into businesses or

customers into shops, we might have information on the amount of

disposable income and shopping habits of the people living in


different areas from loyalty card data. This is known information

about our origins and so we could constrain our spatial interaction

model to this known information - we can make the assumption

that this level of disposable income remains the same. We can then

use other information about the attractiveness of places these

people might like to shop in (store size, variety / specialism of

goods etc.), to estimate how much money a new store opening in

the area might make, or if a new out-of-town shopping centre

opens, how much it might affect the business of shops in the town

centre. This is what is known in the literature as the ‘retail model’

and is perhaps the most common example of a Production (origin)

Constrained Spatial Interaction Model.

2. We might be interested in understanding the impact of a large new

employer in an area on the local flows of traffic or on the demand

for new worker accommodation nearby. A good example of where

this might be the case is with large new infrastructure

developments like new airports. For example, before the go-ahead

for the new third runway at Heathrow was given, one option being

considered was a new runway in the Thames Estuary. If a new

airport was built here, what would be the potential impact on

transport flows in the area and where might workers commute

from? This sort of scenario could be tested with an Attraction


(destination) Constrained Spatial Interaction Model where the

number of new jobs in a destination is known (as well as jobs

in the surrounding area) and the model could be used to estimate

where the workers will be drawn from (and their likely travel-to-

work patterns).

3. We might be interested in understanding the changing patterns of

commuting or migration over time. Data from the Census gives us an accurate snapshot of migration and commuting

patterns every 10 years. In these full data matrices, we know both

the numbers of commuters/migrants leaving origins and arriving at

destinations as well as the interactions between them. If we

constrain our model estimates to this known information at origin

and destination, we can examine various things, including:

A. the ways that the patterns of commuting/migration differ

from the model predictions - where we might get more

migrant/commuter flows than we would expect

B. how the model parameters vary over time - for example how

does distance / cost of travel affect flows over time? Are

people prepared to travel further or less far than before?
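The production (origin) constrained idea in example 1 can be shown in a few lines; every number here is invented for illustration:

```python
import numpy as np

O = np.array([100.0, 200.0])         # known origin totals (e.g. disposable income)
W = np.array([50.0, 30.0, 20.0])     # destination attractiveness (e.g. store size)
d = np.array([[1.0, 2.0, 3.0],       # origin-destination costs
              [2.0, 1.0, 2.0]])
beta = 1.0

# Balancing factors A_i ensure each row of flows sums to O_i:
# T_ij = A_i * O_i * W_j * d_ij^-beta, with A_i = 1 / sum_j(W_j * d_ij^-beta)
A = 1.0 / (W[None, :] * d**-beta).sum(axis=1)
T = (A * O)[:, None] * W[None, :] * d**-beta

# the estimates now reproduce the known origin totals exactly
print(T.sum(axis=1))
```

The attraction constrained model of example 2 is the mirror image, with balancing factors over destinations so columns sum to the known destination totals, and the doubly constrained model of example 3 iterates both sets of factors until rows and columns balance simultaneously.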
