ecoscope.base#

Submodules#

Package Contents#

class ecoscope.base.RelocsCoordinateFilter[source]#

Filter parameters for filtering fixes based on X/Y coordinate ranges or on specific coordinate values

min_x: float#
max_x: float = 180.0#
min_y: float#
max_y: float = 90.0#
filter_point_coords: List[List[float]] | geopandas.GeoSeries#
__post_init__()[source]#
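The bounds check can be sketched in plain Python. This is an illustrative reimplementation, not ecoscope's actual code: the default range values and the use of coordinate tuples (rather than the documented List[List[float]] or GeoSeries) are assumptions for the sketch.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CoordFilter:
    # Illustrative stand-in for RelocsCoordinateFilter; defaults are assumed.
    min_x: float = -180.0
    max_x: float = 180.0
    min_y: float = -90.0
    max_y: float = 90.0
    filter_point_coords: List[Tuple[float, float]] = field(default_factory=list)

    def is_junk(self, x: float, y: float) -> bool:
        # A fix is junk if it falls outside the coordinate ranges or
        # exactly matches one of the explicitly filtered points.
        out_of_range = not (self.min_x <= x <= self.max_x
                            and self.min_y <= y <= self.max_y)
        return out_of_range or (x, y) in self.filter_point_coords

f = CoordFilter(filter_point_coords=[(0.0, 0.0)])
print(f.is_junk(36.8, -1.3))  # in-range fix -> False
print(f.is_junk(0.0, 0.0))    # explicitly filtered point -> True
```

Filtering exact points like (0.0, 0.0) is a common way to drop spurious "null island" fixes produced by GPS units.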
class ecoscope.base.RelocsDateRangeFilter[source]#

Filter parameters for filtering based on a datetime range

start: datetime.datetime#
end: datetime.datetime#
class ecoscope.base.RelocsDistFilter[source]#

Filter based on the distance between consecutive fixes. Fixes are filtered to the range [min_dist_km, max_dist_km].

min_dist_km: float = 0.0#
max_dist_km: float#
temporal_order: str = 'ASC'#
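The [min_dist_km, max_dist_km] logic can be sketched with a haversine distance. This is illustrative only: the real filter marks junk_status on a Relocations dataframe rather than dropping fixes, and whether it measures from the last kept fix or the immediately preceding fix is an implementation detail of ecoscope.

```python
import math

def haversine_km(p1, p2):
    # Great-circle distance between two (lon, lat) points, in km.
    lon1, lat1, lon2, lat2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def dist_filter(fixes, min_dist_km=0.0, max_dist_km=5.0):
    # Keep the first fix, then keep each subsequent fix only if its distance
    # from the last *kept* fix falls within [min_dist_km, max_dist_km].
    kept = [fixes[0]]
    for fix in fixes[1:]:
        if min_dist_km <= haversine_km(kept[-1], fix) <= max_dist_km:
            kept.append(fix)
    return kept

fixes = [(36.80, -1.30), (36.81, -1.30), (36.80, -1.30), (40.00, -1.30)]
print(len(dist_filter(fixes)))  # the ~355 km jump to 40.00 is dropped -> 3
```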
class ecoscope.base.RelocsSpeedFilter[source]#

Filter parameters for filtering based on the speed needed to move from one fix to the next

max_speed_kmhr: float#
temporal_order: str = 'ASC'#
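The speed check can be sketched given precomputed inter-fix distances. flag_speed_junk is a hypothetical helper, not ecoscope API; the real filter marks junk_status on Relocations rows ordered according to temporal_order.

```python
from datetime import datetime, timedelta

def flag_speed_junk(times, dists_km, max_speed_kmhr=60.0):
    # times: fix timestamps sorted ascending (temporal_order='ASC');
    # dists_km[i] is the distance from fix i-1 to fix i (dists_km[0] unused).
    junk = [False] * len(times)
    for i in range(1, len(times)):
        hours = (times[i] - times[i - 1]).total_seconds() / 3600.0
        speed = dists_km[i] / hours if hours > 0 else float("inf")
        junk[i] = speed > max_speed_kmhr
    return junk

t0 = datetime(2023, 1, 1)
times = [t0, t0 + timedelta(hours=1), t0 + timedelta(hours=2)]
dists = [0.0, 5.0, 500.0]  # a 500 km hop in one hour is implausible
print(flag_speed_junk(times, dists))  # [False, False, True]
```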
class ecoscope.base.TrajSegFilter[source]#

Filter parameters for filtering a set of trajectory segments

min_length_meters: float = 0.0#
max_length_meters: float#
min_time_secs: float = 0.0#
max_time_secs: float#
min_speed_kmhr: float = 0.0#
max_speed_kmhr: float#
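A segment survives this filter only if its length, duration, and speed all fall inside their respective bounds. A minimal sketch follows; the max defaults below are arbitrary illustrative values, not ecoscope's, and keep_segment is a hypothetical helper.

```python
def keep_segment(length_m, time_s, speed_kmhr,
                 min_length_meters=0.0, max_length_meters=10_000.0,
                 min_time_secs=0.0, max_time_secs=86_400.0,
                 min_speed_kmhr=0.0, max_speed_kmhr=60.0):
    # A segment is kept only when all three quantities are within bounds.
    return (min_length_meters <= length_m <= max_length_meters
            and min_time_secs <= time_s <= max_time_secs
            and min_speed_kmhr <= speed_kmhr <= max_speed_kmhr)

print(keep_segment(1200.0, 3600.0, 1.2))     # a plausible hour of travel -> True
print(keep_segment(90_000.0, 3600.0, 90.0))  # too long and too fast -> False
```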
class ecoscope.base.EcoDataFrame(data=None, *args, **kwargs)[source]#

Bases: geopandas.GeoDataFrame

EcoDataFrame extends geopandas.GeoDataFrame to provide customizations and allow for simpler extension.

property _constructor#
Used when a manipulation result has the same dimensions as the
original.
__getitem__(key)[source]#

If the result is a column containing only ‘geometry’, return a GeoSeries. If it’s a DataFrame with any columns of GeometryDtype, return a GeoDataFrame.

classmethod from_file(filename, **kwargs)[source]#

Alternate constructor to create a GeoDataFrame from a file.

It is recommended to use geopandas.read_file() instead.

Can load a GeoDataFrame from a file in any format recognized by fiona. See http://fiona.readthedocs.io/en/latest/manual.html for details.

Parameters:
  • filename (str) – File path or file handle to read from. Depending on which kwargs are included, the content of filename may vary. See http://fiona.readthedocs.io/en/latest/README.html#usage for usage details.

  • kwargs (key-word arguments) – These arguments are passed to fiona.open, and can be used to access multi-layer data, data stored within archives (zip files), etc.

Examples

>>> import geodatasets
>>> path = geodatasets.get_path('nybb')
>>> gdf = geopandas.GeoDataFrame.from_file(path)
>>> gdf  
   BoroCode       BoroName     Shape_Leng    Shape_Area                                           geometry
0         5  Staten Island  330470.010332  1.623820e+09  MULTIPOLYGON (((970217.022 145643.332, 970227....
1         4         Queens  896344.047763  3.045213e+09  MULTIPOLYGON (((1029606.077 156073.814, 102957...
2         3       Brooklyn  741080.523166  1.937479e+09  MULTIPOLYGON (((1021176.479 151374.797, 102100...
3         1      Manhattan  359299.096471  6.364715e+08  MULTIPOLYGON (((981219.056 188655.316, 980940....
4         2          Bronx  464392.991824  1.186925e+09  MULTIPOLYGON (((1012821.806 229228.265, 101278...

The recommended method of reading files is geopandas.read_file():

>>> gdf = geopandas.read_file(path)

See also

read_file

read file to GeoDataFrame

GeoDataFrame.to_file

write GeoDataFrame to file

classmethod from_features(features, **kwargs)[source]#

Alternate constructor to create GeoDataFrame from an iterable of features or a feature collection.

Parameters:
  • features

    • Iterable of features, where each element must be a feature dictionary or implement the __geo_interface__.

    • Feature collection, where the ‘features’ key contains an iterable of features.

    • Object holding a feature collection that implements the __geo_interface__.

  • crs (str or dict (optional)) – Coordinate reference system to set on the resulting frame.

  • columns (list of column names, optional) – Optionally specify the column names to include in the output frame. This does not overwrite the property names of the input, but can ensure a consistent output format.

Return type:

GeoDataFrame

Notes

For more information about the __geo_interface__, see https://gist.github.com/sgillies/2217756

Examples

>>> feature_coll = {
...     "type": "FeatureCollection",
...     "features": [
...         {
...             "id": "0",
...             "type": "Feature",
...             "properties": {"col1": "name1"},
...             "geometry": {"type": "Point", "coordinates": (1.0, 2.0)},
...             "bbox": (1.0, 2.0, 1.0, 2.0),
...         },
...         {
...             "id": "1",
...             "type": "Feature",
...             "properties": {"col1": "name2"},
...             "geometry": {"type": "Point", "coordinates": (2.0, 1.0)},
...             "bbox": (2.0, 1.0, 2.0, 1.0),
...         },
...     ],
...     "bbox": (1.0, 1.0, 2.0, 2.0),
... }
>>> df = geopandas.GeoDataFrame.from_features(feature_coll)
>>> df
                  geometry   col1
0  POINT (1.00000 2.00000)  name1
1  POINT (2.00000 1.00000)  name2
__finalize__(*args, **kwargs)[source]#

propagate metadata from other to self

astype(*args, **kwargs)[source]#

Cast a pandas object to a specified dtype.

Returns a GeoDataFrame when the geometry column is kept as geometries, otherwise returns a pandas DataFrame.

See the pandas.DataFrame.astype docstring for more details.

Return type:

GeoDataFrame or DataFrame

merge(*args, **kwargs)[source]#

Merge two GeoDataFrame objects with a database-style join.

Returns a GeoDataFrame if a geometry column is present; otherwise, returns a pandas DataFrame.

Return type:

GeoDataFrame or DataFrame

The extra arguments *args and keyword arguments **kwargs are passed to DataFrame.merge. See https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html for more details.

dissolve(*args, **kwargs)[source]#

Dissolve geometries within groupby into a single observation. This is accomplished by applying the unary_union method to all geometries within a group.

Observations associated with each groupby group will be aggregated using the aggfunc.

Parameters:
  • by (str or list-like, default None) – Column(s) whose values define the groups to be dissolved. If None, the entire GeoDataFrame is considered as a single group. If a list-like object is provided, the values in the list are treated as categorical labels, and polygons will be combined based on the equality of these categorical labels.

  • aggfunc (function or string, default "first") –

    Aggregation function for manipulation of data associated with each group. Passed to pandas groupby.agg method. Accepted combinations are:

    • function

    • string function name

    • list of functions and/or function names, e.g. [np.sum, ‘mean’]

    • dict of axis labels -> functions, function names or list of such.

  • as_index (boolean, default True) – If true, groupby columns become index of result.

  • level (int or str or sequence of int or sequence of str, default None) –

    If the axis is a MultiIndex (hierarchical), group by a particular level or levels.

    Added in version 0.9.0.

  • sort (bool, default True) –

    Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.

    Added in version 0.9.0.

  • observed (bool, default False) –

    This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.

    Added in version 0.9.0.

  • dropna (bool, default True) –

    If True, and if group keys contain NA values, NA values together with row/column will be dropped. If False, NA values will also be treated as the key in groups.

    Added in version 0.9.0.

  • **kwargs

    Keyword arguments to be passed to the pandas DataFrameGroupby.agg method which is used by dissolve. In particular, numeric_only may be supplied, which will be required in pandas 2.0 for certain aggfuncs.

    Added in version 0.13.0.

Return type:

GeoDataFrame

Examples

>>> from shapely.geometry import Point
>>> d = {
...     "col1": ["name1", "name2", "name1"],
...     "geometry": [Point(1, 2), Point(2, 1), Point(0, 1)],
... }
>>> gdf = geopandas.GeoDataFrame(d, crs=4326)
>>> gdf
    col1                 geometry
0  name1  POINT (1.00000 2.00000)
1  name2  POINT (2.00000 1.00000)
2  name1  POINT (0.00000 1.00000)
>>> dissolved = gdf.dissolve('col1')
>>> dissolved  
                                            geometry
col1
name1  MULTIPOINT (0.00000 1.00000, 1.00000 2.00000)
name2                        POINT (2.00000 1.00000)

See also

GeoDataFrame.explode

explode multi-part geometries into single geometries

explode(*args, **kwargs)[source]#

Explode multi-part geometries into multiple single geometries.

Each row containing a multi-part geometry will be split into multiple rows with single geometries, thereby increasing the vertical size of the GeoDataFrame.

Parameters:
  • column (string, default None) – Column to explode. In the case of a geometry column, multi-part geometries are converted to single-part. If None, the active geometry column is used.

  • ignore_index (bool, default False) – If True, the resulting index will be labelled 0, 1, …, n - 1, ignoring index_parts.

  • index_parts (boolean, default True) – If True, the resulting index will be a multi-index (original index with an additional level indicating the multiple geometries: a new zero-based index for each single part geometry per multi-part geometry).

Returns:

Exploded geodataframe with each single geometry as a separate entry in the geodataframe.

Return type:

GeoDataFrame

Examples

>>> from shapely.geometry import MultiPoint
>>> d = {
...     "col1": ["name1", "name2"],
...     "geometry": [
...         MultiPoint([(1, 2), (3, 4)]),
...         MultiPoint([(2, 1), (0, 0)]),
...     ],
... }
>>> gdf = geopandas.GeoDataFrame(d, crs=4326)
>>> gdf
    col1                                       geometry
0  name1  MULTIPOINT (1.00000 2.00000, 3.00000 4.00000)
1  name2  MULTIPOINT (2.00000 1.00000, 0.00000 0.00000)
>>> exploded = gdf.explode(index_parts=True)
>>> exploded
      col1                 geometry
0 0  name1  POINT (1.00000 2.00000)
  1  name1  POINT (3.00000 4.00000)
1 0  name2  POINT (2.00000 1.00000)
  1  name2  POINT (0.00000 0.00000)
>>> exploded = gdf.explode(index_parts=False)
>>> exploded
    col1                 geometry
0  name1  POINT (1.00000 2.00000)
0  name1  POINT (3.00000 4.00000)
1  name2  POINT (2.00000 1.00000)
1  name2  POINT (0.00000 0.00000)
>>> exploded = gdf.explode(ignore_index=True)
>>> exploded
    col1                 geometry
0  name1  POINT (1.00000 2.00000)
1  name1  POINT (3.00000 4.00000)
2  name2  POINT (2.00000 1.00000)
3  name2  POINT (0.00000 0.00000)

See also

GeoDataFrame.dissolve

dissolve geometries into a single observation.

plot(*args, **kwargs)[source]#
reset_filter(inplace=False)[source]#
remove_filtered(inplace=False)[source]#
class ecoscope.base.Relocations(data=None, *args, **kwargs)[source]#

Bases: EcoDataFrame

Relocations is a model for a set of fixes from a given subject. Because fixes are temporal, they can be ordered ascending or descending. The additional_data dict can contain info specific to the subject and relocations: name, type, region, sex, etc. These values apply to all fixes in the relocations array. If they vary, they should instead be put into each fix’s additional_data dict.

classmethod from_gdf(gdf, groupby_col=None, time_col='fixtime', uuid_col=None, **kwargs)[source]#
Parameters:
  • gdf (GeoDataFrame) – Observations data

  • groupby_col (str, optional) – Name of gdf column of identities to treat as separate individuals. Usually subject_id. Default is treating the gdf as being of a single individual.

  • time_col (str, optional) – Name of gdf column containing relocation times. Default is ‘fixtime’.

  • uuid_col (str, optional) – Name of gdf column of row identities. Used as index. Default is existing index.

static _apply_speedfilter(df, fix_filter)[source]#
static _apply_distfilter(df, fix_filter)[source]#
apply_reloc_filter(fix_filter=None, inplace=False)[source]#

Apply a given filter by marking each fix’s junk_status based on the conditions of the filter

distance_from_centroid()#
cluster_radius()#

The cluster radius is the largest distance between a point in the relocations and the centroid of the relocations

cluster_std_dev()#

The cluster standard deviation is the standard deviation of the radii from the centroid to each point making up the cluster

threshold_point_count(threshold_dist)[source]#

Counts the number of points in the cluster that are within a threshold distance of the geographic centre

apply_threshold_filter(threshold_dist_meters=float('Inf'))[source]#
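The three cluster metrics above can be sketched together for planar coordinates. This is an illustrative reimplementation, not ecoscope's code: the real methods work on geographic geometries, and whether the standard deviation is population or sample is an implementation detail.

```python
import math
import statistics

def cluster_stats(points, threshold_m=500.0):
    # points: planar (x, y) coordinates in meters (e.g. a UTM projection).
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    return {
        "cluster_radius": max(radii),                 # farthest point from centroid
        "cluster_std_dev": statistics.pstdev(radii),  # spread of the radii
        "threshold_point_count": sum(r <= threshold_m for r in radii),
    }

pts = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (900.0, 900.0)]
stats = cluster_stats(pts)
print(stats["threshold_point_count"])  # the one outlier is beyond 500 m -> 3
```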
class ecoscope.base.Trajectory(data=None, *args, **kwargs)[source]#

Bases: EcoDataFrame

A trajectory represents a time-ordered collection of segments. Currently only straight track segments exist. It is based on an underlying relocs object that is the point representation.

classmethod from_relocations(gdf, *args, **kwargs)[source]#

Create a Trajectory from a Relocations dataframe.

Parameters:

gdf (GeoDataFrame) – Relocations geodataframe with relevant columns

Return type:

Trajectory

get_displacement()[source]#

Get displacement in meters between first and final fixes.

get_tortuosity()[source]#

Get tortuosity for dataframe defined as distance traveled divided by displacement between first and final points.
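For a planar track, tortuosity reduces to path length over straight-line displacement. A sketch (illustrative only; ecoscope's implementation works on geographic trajectory segments):

```python
import math

def tortuosity(track):
    # track: ordered (x, y) fixes in a planar CRS (meters).
    path = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(track, track[1:]))
    displacement = math.hypot(track[-1][0] - track[0][0],
                              track[-1][1] - track[0][1])
    return path / displacement  # 1.0 for a straight line, larger when winding

straight = [(0, 0), (500, 0), (1000, 0)]
zigzag = [(0, 0), (500, 500), (1000, 0)]
print(tortuosity(straight))          # 1.0
print(round(tortuosity(zigzag), 3))  # 1.414
```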

static _create_multitraj(df)[source]#
static _create_trajsegments(gdf)[source]#
apply_traj_filter(traj_seg_filter, inplace=False)[source]#
get_turn_angle()[source]#
upsample(freq)[source]#

Interpolate to create upsampled Relocations.

Parameters:

freq (str, pd.Timedelta or pd.DateOffset) – Sampling frequency for new Relocations object

Returns:

relocs

Return type:

ecoscope.base.Relocations

to_relocations()[source]#

Converts a Trajectory object to a Relocations object.

Return type:

ecoscope.base.Relocations

downsample(freq, tolerance='0S', interpolation=False)[source]#

Function to downsample relocations.

Parameters:
  • freq (str, pd.Timedelta or pd.DateOffset) – Downsampling frequency for new Relocations object

  • tolerance (str, pd.Timedelta or pd.DateOffset) – Tolerance on the downsampling frequency

  • interpolation (bool, optional) – If true, interpolates locations on the whole trajectory

Return type:

ecoscope.base.Relocations

static _straighttrack_properties(df)[source]#

Private function used by Trajectory class.

Parameters:

df (geopandas.GeoDataFrame)

class ecoscope.base.cachedproperty(func)[source]#

cachedproperty is used similarly to property, except that the wrapped method is only called once. This is commonly used to implement lazy attributes.

func#
__doc__()[source]#
__isabstractmethod__()[source]#
__get__(obj, objtype=None)[source]#
__repr__()[source]#
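The descriptor mechanics behind this pattern can be sketched as follows: on first access the wrapped method runs, and its result is written into the instance __dict__ under the same name, so later lookups find the plain attribute and never reach the descriptor again. A minimal sketch (illustrative; the Track class and its length attribute are hypothetical examples):

```python
class cachedproperty:
    # Minimal cached-property descriptor: the wrapped method runs once per
    # instance; the result then shadows the descriptor on later lookups.
    def __init__(self, func):
        self.func = func
        self.__doc__ = getattr(func, "__doc__", None)

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = obj.__dict__[self.func.__name__] = self.func(obj)
        return value

class Track:
    def __init__(self):
        self.calls = 0

    @cachedproperty
    def length(self):
        self.calls += 1  # an expensive computation would stand in here
        return 42

t = Track()
print(t.length, t.length, t.calls)  # 42 42 1
```

Python 3.8+ ships an equivalent as functools.cached_property.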
ecoscope.base.create_meshgrid(aoi, in_crs, out_crs, xlen=1000, ylen=1000, return_intersecting_only=True, align_to_existing=None)[source]#

Create a grid covering aoi.

Parameters:
  • aoi (shapely.geometry.base.BaseGeometry) – The area of interest. Should be in a UTM CRS.

  • in_crs (value) – Coordinate Reference System of input aoi. Can be anything accepted by pyproj.CRS.from_user_input(). Geometry is automatically converted to UTM CRS as an intermediate for computation.

  • out_crs (value) – Coordinate Reference System of output gs. Can be anything accepted by pyproj.CRS.from_user_input(). Geometry is automatically converted to UTM CRS as an intermediate for computation.

  • xlen (int, optional) – The width of a grid cell in meters.

  • ylen (int, optional) – The height of a grid cell in meters.

  • return_intersecting_only (bool, optional) – Whether to return only grid cells intersecting with the aoi.

  • align_to_existing (geopandas.GeoSeries or geopandas.GeoDataFrame, optional) – If provided, attempts to align created grid to start of existing grid. Requires a CRS and valid geometry.

Returns:

gs – Grid of boxes. CRS is converted to out_crs.

Return type:

geopandas.GeoSeries
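Setting CRS handling aside, the gridding itself amounts to tiling the aoi's bounding box with xlen-by-ylen cells. A sketch using plain tuples in place of shapely boxes (illustrative only; meshgrid_cells is a hypothetical helper, not ecoscope API):

```python
import math

def meshgrid_cells(minx, miny, maxx, maxy, xlen=1000, ylen=1000):
    # Tile the bounding box with xlen-by-ylen cells; the final row/column
    # may overshoot the box so the whole area is covered.
    nx = math.ceil((maxx - minx) / xlen)
    ny = math.ceil((maxy - miny) / ylen)
    return [(minx + i * xlen, miny + j * ylen,
             minx + (i + 1) * xlen, miny + (j + 1) * ylen)
            for i in range(nx) for j in range(ny)]

cells = meshgrid_cells(0, 0, 2500, 1500)
print(len(cells))  # 3 columns x 2 rows = 6 cells
```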

ecoscope.base.groupby_intervals(df, col, intervals)[source]#
Parameters:
  • df (pd.DataFrame) – Data to group

  • col (str) – Name of column to group on

  • intervals (pd.IntervalIndex) – Intervals to group on

Return type:

pd.core.groupby.DataFrameGroupBy
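The interval-grouping idea can be sketched with the stdlib, mirroring pandas' default closed='right' (a, b] intervals. This is illustrative only; the real function groups a DataFrame on a pd.IntervalIndex and returns a DataFrameGroupBy.

```python
import bisect

def group_by_intervals(values, edges):
    # edges define consecutive left-open/right-closed intervals (a, b],
    # matching pandas' IntervalIndex default of closed='right'.
    groups = {}
    for v in values:
        i = bisect.bisect_left(edges, v)
        if 0 < i < len(edges):
            groups.setdefault((edges[i - 1], edges[i]), []).append(v)
    return groups

g = group_by_intervals([1, 5, 12, 25], edges=[0, 10, 20])
print(g)  # {(0, 10): [1, 5], (10, 20): [12]}; 25 falls outside all intervals
```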

ecoscope.base.hex_to_rgba(input)[source]#
Parameters:

input (str)

Return type:

tuple
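A plausible sketch of the conversion, assuming '#RRGGBB' or '#RRGGBBAA' input with 0-255 channels (illustrative; ecoscope's exact parsing and alpha convention may differ):

```python
def hex_to_rgba(value):
    # Accepts '#RRGGBB' or '#RRGGBBAA'; alpha defaults to fully opaque.
    value = value.lstrip('#')
    if len(value) == 6:
        value += 'ff'
    return tuple(int(value[i:i + 2], 16) for i in (0, 2, 4, 6))

print(hex_to_rgba('#ff8800'))    # (255, 136, 0, 255)
print(hex_to_rgba('#ff880080'))  # (255, 136, 0, 128)
```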

ecoscope.base.color_tuple_to_css(color)[source]#
Parameters:

color (Tuple[int, int, int, int])
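A sketch of the tuple-to-CSS conversion, assuming 0-255 channels and that the alpha channel is normalized to the [0, 1] range CSS expects (illustrative; the exact output format is an assumption):

```python
def color_tuple_to_css(color):
    # (r, g, b, a) with 0-255 channels; CSS rgba() takes alpha in [0, 1].
    r, g, b, a = color
    return f"rgba({r}, {g}, {b}, {a / 255})"

print(color_tuple_to_css((255, 136, 0, 255)))  # rgba(255, 136, 0, 1.0)
```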