Feature public transport assignment
* file transfer from the edsger package: shortest hyperpath algorithm

* first try on compiling the public transport module

* added the COO to CSC sparse format converter
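
For context, the conversion itself is the standard COO-to-CSC change of sparse layout; a minimal sketch with scipy.sparse (the package's converter is a custom Cython routine, so this is illustrative only):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Demand in COO (coordinate) format: one (row, col, value) triple per OD pair
row = np.array([0, 0, 1, 2])
col = np.array([1, 2, 2, 0])
data = np.array([10.0, 5.0, 8.0, 3.0])

# CSC (compressed sparse column) allows fast per-destination column slicing,
# which is what a destination-based assignment loop needs
demand = coo_matrix((data, (row, col)), shape=(3, 3))
demand_csc = demand.tocsc()
print(demand_csc.indptr)  # column pointers of the CSC layout
```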

* Python interface to the Cythonized Spiess & Florian algorithm

* assignment loop

* just making sure zero entries are removed from the demand matrix

* Spiess & Florian test case

* assign method S&F test

* added some compiler directives

* combined cdefs

* moved the allocation of f_i_vec outside of compute_SF_in

* cleanup

* moved the allocation of u_j_c_a_vec outside of compute_SF_in

* cleanup

* removed np.sum()

* moved the allocation of u_i_vec outside of compute_SF_in

* added a wrapper of the C function nqsort, in order to replace np.argsort

* moved the allocation of edges_indices outside of compute_SF_in

* released the GIL in compute_SF_in

* comment fix

* added the initialization of the edge volume vector

* Attempt parallelising PuT assignment

* Further progress towards parallelisation

* initialized the module for the S&F transit graph creation

* handle the case of int data given as a float string
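
The gotcha being handled, in a two-line sketch: plain `int()` rejects float-formatted strings, so the value has to go through `float()` first.

```python
# int("3.0") raises ValueError: invalid literal for int() with base 10
def to_int(value: str) -> int:
    return int(float(value))  # accepts both "3" and "3.0"

assert to_int("3") == 3
assert to_int("3.0") == 3
```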

* renaming

* Complete parallelisation of PuT assignment over destinations

* u_j_c_a_vec was being initialised to an incorrect size

* Move parallelised call to compute_SF_in to separate function

Introduced a new accumulation buffer and dropped reacquisition of the GIL

* Use numpy asserts for nicer error messages

Using default tolerances of previous asserts
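
Presumably along these lines (a sketch, not the actual test code):

```python
import numpy as np

actual = np.array([1.0000001, 2.0])
desired = np.array([1.0, 2.0])

# Unlike a bare `assert`, this prints the mismatching elements and the
# maximum absolute/relative differences when it fails
np.testing.assert_allclose(actual, desired, rtol=1e-5, atol=1e-8)
```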

* Restore HyperpathGenerating.run() functionality
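
For orientation, a usage sketch on a tiny network (the import path and the column/argument names here are assumptions, not a verified API):

```python
import pandas as pd
from aequilibrae.paths.public_transport import HyperpathGenerating  # path assumed

# Each edge carries a travel time and a boarding frequency, following
# the Spiess & Florian formulation
edges = pd.DataFrame({
    "tail": [0, 0, 1],
    "head": [1, 2, 2],
    "trav_time": [5.0, 20.0, 6.0],
    "freq": [0.5, 0.1, 0.2],
})

hp = HyperpathGenerating(edges, tail="tail", head="head", trav_time="trav_time", freq="freq")
hp.run(origin=0, destination=2, volume=1.0)  # load 1 unit of demand on the optimal strategy
```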

* Style

* create stop vertices

* renamed a column

* Fix OpenMP on Windows

* Spelling and default to using all available threads

* Fix default thread amount

* Move hyperpath testing and adjust testing structure

Adds new parallel agreement test as well

* Credit Bell's network construction

* added the class global structure

* using nullable Int type

* filtering stops on a given time period

* filtering patterns, routes and building a line segment dataframe

* bug fix

* bug fix + filtering the stops corresponding to the given time range

* removed a comment: segment indices are 0-based, not 1-based as specified before

* created a method for the line segments dataframe + now creating the boarding vertices dataframe

* creating the alighting vertices dataframe

* concatenating all the vertex dataframes into a single one

* computing the mean travel time for each (pattern, seq) pair

* fixes the segment travel time computation

* now creating the on-board edges

* Add create_od_vertices()

* Use SQL over shapefile, data must be migrated to SQLite first.

* Add creation of connector edges

* Add attempt at dwell and inner transfer edges

* Add missing import

* small update

* temporarily using 2 sqlite connections in the parameters

* renaming within the SQL query

* dwell edges creation

* small code simplification

* alighting edges creation

* cleanup

* wip

* compute the mean headway
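
The computation is essentially a grouped diff over successive departures; a pandas sketch with hypothetical column names:

```python
import pandas as pd

trips = pd.DataFrame({
    "pattern_id": [1, 1, 1, 2, 2],
    "stop_id": ["A", "A", "A", "A", "A"],
    "departure": [3600, 4500, 5400, 3900, 5700],  # seconds from midnight
})

# Headway = time between successive departures of the same pattern at the same stop
trips = trips.sort_values(["pattern_id", "stop_id", "departure"])
trips["headway"] = trips.groupby(["pattern_id", "stop_id"])["departure"].diff()
print(trips.groupby("pattern_id")["headway"].mean())  # pattern 1: 900 s, pattern 2: 1800 s
```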

* renaming

* filling missing travel time values with travel times computed over the whole trips_schedule (now limited to the given time range)

* fixed the mean headway computation part

* variable renaming

* boarding edges creation

* concatenate the edges of different types

* added a projected CRS to compute distances

* now merging within the SQL

* cleanup

* updated the connector edges creation

* creation of the inner stop transfer edges

* ability to add spatial noise to boarding and alighting vertex coordinates in order to visualize boarding, alighting, transfer and collocated on-board edges

* creation of the outer stop transfer edges

* cleanup + divided the connector type into 2 types: access and egress

* added a boolean argument to activate the creation of the outer stop transfer edges

* creating some walking edges between the stops of a station

* Add overlapping regions for connectors construction.

Fix references to graph instead of self

* Fix missing edges, add max distance

* Fix memory blowout when joining on null and empty string entries

* wip

* renaming + merge

* typo

* casting all stop and parent_station ids to text in sql queries

* Using consistent data types during transit graph creation

* fixed the Cython compilation: added noexcept to the declaration of _compare

* revert to the sequential assignment algorithm (in order to compile)

* comments + initialization

* Add preliminary db saving

Usage:
```python
graph = SF_graph_builder(pt_con, prj_con)
graph.create_vertices()
graph.create_edges()
graph.create_additional_db_fields()
graph.save_vertices()
graph.save_edges()
```

Caveats are noted in the comments of the functions.

* Add shifting for duplicated geometry when saving

* IDs should be indexed from 1

Allows the whole graph to be read back into aequilibrae

* Rename terms to be more consistent with AequilibraE, use wkb as well

We now use WKB format consistently and only convert out when needed. String manipulation and regexes
have also been replaced.

type -> node_type, link_type
head_vert_id -> a_node
tail_vert_id -> b_node
coord -> geometry
vert_id -> node_id
edge_id -> link_id

* Copy over SQL specs for move from proj db to transit storage

* Mirror more triggers and tables into transit db

Includes files copied from network triggers with slight modifications

Need to remove unnecessary columns from links and nodes tables.

Zones are still saved to project db and also need to be migrated.

* renamed some variables

* moved the haversine distance function to the utils submodule
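
For reference, a standard haversine implementation looks like this (a sketch; the package's own version lives in the utils submodule):

```python
from math import asin, cos, radians, sin, sqrt

def haversine(lon1: float, lat1: float, lon2: float, lat2: float) -> float:
    """Great-circle distance in meters between two lon/lat points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6,371 km
```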

* comments

* Add custom loading of zones to drop dependence on project db

* revert back to the parallel version

* added Jake's fix to support Cython 3

* some code cleaning in the travel time computation

* Style

* cleanup and refactoring

* Add direction attribute and conversion to an AequilibraE graph

* Integrate results saving for hp assignment

* typo

* Misc post merge fixes and formatting

* wip

* Add missing results.sql

* SQL query update

* comments

* sql update + public attributes made private

* More consistent datatypes

This does include a change to make stop_ids TEXT with sqlite. I haven't fully explored the effects of this change.

* tried to fix an issue with some inverted tail and head IDs for access and egress connectors

* set the edge direction attribute to 1 for all edges (all edges are directed)

* quick fix to drop duplicated transfer edges

* now creating distinct origin and destination nodes

* now supporting the blocking_centroid_flows mode when creating the transit graph, where the origin and destination nodes are distinct

* added a method to convert taz ids to node ids in the demand matrix

* added the max_connectors_per_zone parameter to select at most k connectors per zone with smallest travel time
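
Conceptually a per-zone top-k selection on travel time; a pandas sketch (names hypothetical):

```python
import pandas as pd

connectors = pd.DataFrame({
    "zone_id": [1, 1, 1, 2, 2],
    "stop_id": ["A", "B", "C", "D", "E"],
    "travel_time": [120.0, 45.0, 300.0, 60.0, 90.0],
})

k = 2  # max_connectors_per_zone
# Sorting first, then taking the head of each group keeps the k fastest connectors
kept = connectors.sort_values("travel_time").groupby("zone_id").head(k)
print(kept.sort_values(["zone_id", "travel_time"]))  # drops stop "C" for zone 1
```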

* Scipy 1.6 is 2 years old, prefer KDTree over the cKDTree alias
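
KDTree is a drop-in replacement; a sketch of the kind of nearest-stop query it supports (coordinates invented):

```python
import numpy as np
from scipy.spatial import KDTree

stops = np.array([[0.0, 0.0], [100.0, 50.0], [200.0, 10.0]])  # projected coordinates
centroids = np.array([[90.0, 40.0], [5.0, 5.0]])

tree = KDTree(stops)
dist, idx = tree.query(centroids, k=1)  # nearest stop for each zone centroid
print(idx)   # -> [1 0]
print(dist)  # Euclidean distances in the projected CRS
```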

* Add initial connector project matching

* Allow troublesome triggers to be disabled

* Improvements on the project line string matching. Not complete

* Project matching progress

* Mimic changes in #455

* fixup! Allow troublesome triggers to be disabled

* PathResults.compute_path fix stale variables

Previously if a path that exists was computed, and then a path that does not exist was attempted, the old path variables
were not cleared.
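
A sketch of the failure mode and the fix (toy classes, not the actual PathResults internals):

```python
class FakeGraph:
    def search(self, origin, destination):
        # Pretend only the path 0 -> 1 exists
        if (origin, destination) == (0, 1):
            return True, [0, 1], [0.0, 5.0]
        return False, None, None


class PathResultsSketch:
    def __init__(self):
        self.path = None
        self.milepost = None

    def compute_path(self, origin, destination, graph):
        # Reset up front: a failed search must not leave the previous
        # path visible to the caller (this was the stale-variable bug)
        self.path = None
        self.milepost = None
        found, path, milepost = graph.search(origin, destination)
        if found:
            self.path, self.milepost = path, milepost


res = PathResultsSketch()
res.compute_path(0, 1, FakeGraph())  # res.path == [0, 1]
res.compute_path(0, 2, FakeGraph())  # res.path is None again, not the old path
```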

* Working project matching

* Use existing line string data for a better result

* Refactor project matching into smaller method, add more detailed doc

* Misc doc changes + include trigger_settings in db creation

* Allow supplying zones as shapely objects, by default add zone

* Allow specifying the graph to match against

* Style

* Adjust to_aeq method to reflect other changes

* Use the constructed RNG generator

* Documentation overhaul

Format docstrings as RST. The generated HTML docs look correct. Not sure how to control which classes are included in the TOC.

* Pandas warning fix

* Add Coquimbo public transit assignment example

* fixup! Add Coquimbo public transit assignment example

* updated the docstrings and default values

* include SF_graph_builder in the API documentation

* made many methods protected

* updated the graph builder docstring

* bug fix in the case where there are no transfer edges

* uniform argument naming when calling pd.read_sql

* bug fix dealing with the case of creating walking edges without any parent station

* used a newer pandas method to remove future warning

* new documentation about transit graph creation

* images associated with the transit graph documentation

* structure of a new documentation page

* completed the hyperpath documentation page

* more documentation (HyperpathGenerating)

* fix: added dwell edges

* PT Integration (#476)

* Models with no centroids (#472)

* Bumps up version for release

* Bumps up version for release

* docs

---------

Co-authored-by: pveigadecamargo <pveigadecamargo@anl.gov>

* Better network skimming and path computation headers and typo fix (#474)

* Integration scaffolding

* Transit assignment

* Change "id" variable naming to avoid double underscores

Double underscore variables are difficult to inherit without duplicating. I've tried my best to group all these renames
into a single commit to be revertible, but some others are deep within a refactor.

This changes the convention of using `__id__` and `__graph_id__` to `_id` and `_graph_id`. Since these variables need to
be accessible to other methods, they are not private. The double underscore worked previously: since there was only ever
one class with these variables, the name mangling was always correct.
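
Strictly, dunder names such as `__id__` escape mangling (trailing underscores), but the rename also covered names like `__build_directed_graph`, which callers previously had to reach as `graph._Graph__build_directed_graph`. A toy illustration of the pitfall (not package code):

```python
class GraphBase:
    def __init__(self):
        self.__id = "abc"  # stored as self._GraphBase__id (name-mangled)

    def check(self):
        return self.__id   # resolves to _GraphBase__id: fine


class TransitGraph(GraphBase):
    def check_again(self):
        return self.__id   # resolves to _TransitGraph__id: AttributeError


TransitGraph().check()           # works
# TransitGraph().check_again()   # would raise AttributeError
```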

* Create GraphBase and TransitGraph classes

* Create TransportClassBase and TransitClass classes

* Create AssignmentBase and TransitAssignment classes

* Create AssignmentResultsBase and AssignmentResultsTransit classes

* Comments, add OptimalStrategies class

* Incomplete (full) workflow

* fixup! Incomplete (full) workflow

* fixes export of omx matrices (#484)

Co-authored-by: pveigadecamargo <pveigadecamargo@anl.gov>

* Move periods table

* Lots of changes

Limit graph loading and saving to a single graph for now
Flesh out the results API, including saving and loading
Move unrelated variables and methods

* Add Periods and Period classes, similar to Nodes and Links classes

* Remove period from database saving for now

* Propagate period ids and various changes

* Raise nicer errors

* Style

* removes unnecessary use of sqlite3 connection cursors and cached connections (#478)

* removes unnecessary use of sqlite3 connection cursors

* removes unnecessary use of sqlite3 connection cursors and saved connections

* removes silly use of a cursor

* improves on connection styles

* improves on connection styles

* improves on connection styles

* improves on connection styles

* improves on connection styles

* improves on connection styles

* improves on connection styles

* improves on connection styles

* improves on connection styles

* improves on connection styles

* improves on connection styles

---------

Co-authored-by: pveigadecamargo <pveigadecamargo@anl.gov>

* Migrate docs to new API, fix reloading graphs and begin updating tests.

* Style and build errors

* Typing

* Revert changes to HyperpathGenerating and tests, use arrays instead

* Exclude abstract methods from coverage tests

* Period, Periods, TransitGraph, and TransitGraphBuilder tests

* Style

* Cleans graph creation (#482)

Co-authored-by: pveigadecamargo <pveigadecamargo@anl.gov>

* Adds support page (#481)

* updates examples

* updates examples

* Lower coverage requirement, typos

* Remove patches

* updates examples

* fixup! Remove patches

* Missing var

* Deprecates Python 3.7

* Fixes documentation

* Fixes documentation

* code cleaning

* code cleaning

* code cleaning

* updates setup classifiers

* Fixes centroid assignment

---------

Co-authored-by: Pedro Camargo <c@margo.co>
Co-authored-by: pveigadecamargo <pveigadecamargo@anl.gov>

* renaming

---------

Co-authored-by: djfrancesco <pacullfrancois@gmail.com>
Co-authored-by: Jake Moss <jake.moss@uqconnect.edu.au>
Co-authored-by: François Pacull <35044397+djfrancesco@users.noreply.github.com>
Co-authored-by: pveigadecamargo <pveigadecamargo@anl.gov>
5 people committed Dec 16, 2023
1 parent 5ce7941 commit e6e7334
Showing 101 changed files with 6,518 additions and 340 deletions.
5 changes: 4 additions & 1 deletion .coveragerc
@@ -1,3 +1,6 @@
[report]
-fail_under = 81.0
+fail_under = 75
show_missing = True
+exclude_lines =
+pragma: no cover
+@abstract
38 changes: 0 additions & 38 deletions .github/tests_linux.yml

This file was deleted.

2 changes: 1 addition & 1 deletion .github/workflows/build_linux.yml
@@ -21,7 +21,7 @@ jobs:
- name: Build manylinux Python wheels
uses: RalfG/python-wheels-manylinux-build@v0.7.1
with:
-python-versions: 'cp37-cp37m cp38-cp38 cp39-cp39 cp310-cp310 cp311-cp311 cp312-cp312'
+python-versions: 'cp38-cp38 cp39-cp39 cp310-cp310 cp311-cp311 cp312-cp312'
pip-wheel-args: '--no-deps'

- name: Moves wheels
2 changes: 1 addition & 1 deletion .github/workflows/build_mac.yml
@@ -14,7 +14,7 @@ jobs:
continue-on-error: true
strategy:
matrix:
-python-version: [ '3.7', '3.8', '3.9', '3.10', '3.11', '3.12']
+python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
steps:
- uses: actions/checkout@v3
- name: Set Python environment
2 changes: 1 addition & 1 deletion .github/workflows/build_windows.yml
@@ -10,7 +10,7 @@ jobs:
continue-on-error: true
strategy:
matrix:
-python-version: [ '3.7', '3.8', '3.9', '3.10', '3.11', '3.12']
+python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
architecture: ['x64']
steps:
- uses: actions/checkout@v3
2 changes: 1 addition & 1 deletion .github/workflows/debug_tests.yml
@@ -6,7 +6,7 @@ jobs:

testing:
runs-on: ubuntu-20.04
-container: python:3.7
+container: python:3.9
steps:
- uses: actions/checkout@v3
- name: Install dependencies
2 changes: 1 addition & 1 deletion .github/workflows/test_linux_with_coverage.yml
@@ -9,7 +9,7 @@ jobs:
HAS_SECRETS: ${{ secrets.AWS_SECRET_ACCESS_KEY != '' }}
strategy:
matrix:
-python-version: [3.9]
+python-version: [3.10]
steps:
- uses: actions/checkout@v3
- name: Install dependencies
2 changes: 1 addition & 1 deletion .github/workflows/unit_tests.yml
@@ -31,7 +31,7 @@ jobs:
runs-on: ${{ matrix.os}}
strategy:
matrix:
-python-version: [ '3.7', '3.8', '3.9', '3.10', '3.11', '3.12']
+python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
os: [windows-latest, ubuntu-latest]

max-parallel: 20
6 changes: 3 additions & 3 deletions __version__.py
@@ -1,5 +1,5 @@
-version = 0.9
-minor_version = "6"
-release_name = "Queluz"
+version = 1.0
+minor_version = "0"
+release_name = "Rio de Janeiro"

release_version = f"{version}.{minor_version}"
5 changes: 3 additions & 2 deletions aequilibrae/paths/AoN.pyx
@@ -187,7 +187,7 @@ def path_computation(origin, destination, graph, results):
dest = destination
origin_index = graph.nodes_to_indices[orig]
dest_index = graph.nodes_to_indices[dest]
-if results.__graph_id__ != graph.__id__:
+if results._graph_id != graph._id:
raise ValueError("Results object not prepared. Use --> results.prepare(graph)")


@@ -317,6 +317,7 @@ def path_computation(origin, destination, graph, results):
results.path_link_directions = None
results.milepost = None


def update_path_trace(results, destination, graph):
# type: (PathResults, int, Graph) -> (None)
"""
@@ -391,7 +392,7 @@ def skimming_single_origin(origin, graph, result, aux_result, curr_thread):
origin_index = graph.compact_nodes_to_indices[orig]

graph_fs = graph.compact_fs
-if result.__graph_id__ != graph.__id__:
+if result._graph_id != graph._id:

raise ValueError("Results object not prepared. Use --> results.prepare(graph)")

6 changes: 3 additions & 3 deletions aequilibrae/paths/__init__.py
@@ -4,10 +4,10 @@
from aequilibrae.paths.network_skimming import NetworkSkimming
from aequilibrae.paths.all_or_nothing import allOrNothing
from aequilibrae.paths.assignment_paths import AssignmentPaths
-from aequilibrae.paths.traffic_class import TrafficClass
-from aequilibrae.paths.traffic_assignment import TrafficAssignment
+from aequilibrae.paths.traffic_class import TrafficClass, TransitClass
+from aequilibrae.paths.traffic_assignment import TrafficAssignment, TransitAssignment
from aequilibrae.paths.vdf import VDF
-from aequilibrae.paths.graph import Graph
+from aequilibrae.paths.graph import Graph, TransitGraph

from aequilibrae import global_logger

2 changes: 1 addition & 1 deletion aequilibrae/paths/all_or_nothing.py
@@ -39,7 +39,7 @@ def __init__(self, matrix, graph, results):
self.report = []
self.cumulative = 0

-if results.__graph_id__ != graph.__id__:
+if results._graph_id != graph._id:
raise ValueError("Results object not prepared. Use --> results.prepare(graph)")

elif matrix.matrix_view is None:
38 changes: 27 additions & 11 deletions aequilibrae/paths/graph.py
@@ -1,15 +1,18 @@
-from os.path import join
import pickle
import uuid
+from abc import ABC
from datetime import datetime
+from os.path import join
from typing import List, Tuple, Optional

import numpy as np
import pandas as pd
+from aequilibrae.context import get_logger
from aequilibrae.paths.AoN import build_compressed_graph

-from aequilibrae.context import get_logger


-class Graph(object):
+class GraphBase(ABC):
"""
Graph class
"""
@@ -75,7 +78,7 @@ def __init__(self, logger=None):
self.g_link_crosswalk = np.array([]) # 4 a link ID in the BIG graph, a corresponding link in the compressed 1

# Randomly generate a unique Graph ID randomly
-self.__id__ = uuid.uuid4().hex
+self._id = uuid.uuid4().hex

def default_types(self, tp: str):
"""
@@ -131,7 +134,7 @@ def prepare_graph(self, centroids: Optional[np.ndarray]) -> None:
}
)

-properties = self.__build_directed_graph(self.network)
+properties = self._build_directed_graph(self.network, self.centroids)
self.all_nodes, self.num_nodes, self.nodes_to_indices, self.fs, self.graph = properties

# We generate IDs that we KNOW will be constant across modes
@@ -152,7 +155,7 @@ def __build_compressed_graph(self):
# We build a groupby to save time later
self.__graph_groupby = self.graph.groupby(["__compressed_id__"])

-def __build_directed_graph(self, network: pd.DataFrame):
+def _build_directed_graph(self, network: pd.DataFrame, centroids: np.ndarray):
all_titles = list(network.columns)

not_pos = network.loc[network.direction != 1, :]
@@ -187,8 +190,8 @@ def __build_directed_graph(self, network: pd.DataFrame):

# Now we take care of centroids
nodes = np.unique(np.hstack((df.a_node.values, df.b_node.values))).astype(self.__integer_type)
-nodes = np.setdiff1d(nodes, self.centroids, assume_unique=True)
-all_nodes = np.hstack((self.centroids, nodes)).astype(self.__integer_type)
+nodes = np.setdiff1d(nodes, centroids, assume_unique=True)
+all_nodes = np.hstack((centroids, nodes)).astype(self.__integer_type)

num_nodes = all_nodes.shape[0]

@@ -238,7 +241,7 @@ def exclude_links(self, links: list) -> None:
if self.centroids is not None:
self.prepare_graph(self.centroids)
self.set_blocked_centroid_flows(self.block_centroid_flows)
-self.__id__ = uuid.uuid4().hex
+self._id = uuid.uuid4().hex

def __build_column_names(self, all_titles: List[str]) -> Tuple[list, list]:
fields = [x for x in self.required_default_fields]
@@ -382,7 +385,7 @@ def save_to_disk(self, filename: str) -> None:
mygraph["skim_fields"] = self.skim_fields
mygraph["block_centroid_flows"] = self.block_centroid_flows
mygraph["centroids"] = self.centroids
mygraph["graph_id"] = self.__id__
mygraph["graph_id"] = self._id
mygraph["mode"] = self.mode

with open(filename, "wb") as f:
@@ -412,7 +415,7 @@ def load_from_disk(self, filename: str) -> None:
self.skim_fields = mygraph["skim_fields"]
self.block_centroid_flows = mygraph["block_centroid_flows"]
self.centroids = mygraph["centroids"]
-self.__id__ = mygraph["graph_id"]
+self._id = mygraph["graph_id"]
self.mode = mygraph["mode"]
self.__build_derived_properties()

@@ -483,3 +486,16 @@ def save_compressed_correspondence(self, path, mode_name, mode_id):
self.graph.to_feather(graph_path)
node_path = join(path, f"nodes_to_indices_c{mode_name}_{mode_id}.feather")
pd.DataFrame(self.nodes_to_indices, columns=["node_index"]).to_feather(node_path)


class Graph(GraphBase):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)


class TransitGraph(GraphBase):
    def __init__(self, config: dict = None, od_node_mapping: pd.DataFrame = None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._config = config
        self.od_node_mapping = od_node_mapping
        self.mode = "t"
2 changes: 1 addition & 1 deletion aequilibrae/paths/graph_building.pyx
@@ -176,7 +176,7 @@ def build_compressed_graph(graph):

df = pd.concat([df, comp_lnk])
df = df[["id", "link_id", "a_node", "b_node", "direction"]]
-properties = graph._Graph__build_directed_graph(df) # FIXME: Don't circumvent name mangling
+properties = graph._build_directed_graph(df, graph.centroids)
graph.compact_all_nodes = properties[0]
graph.compact_num_nodes = properties[1]
graph.compact_nodes_to_indices = properties[2]