zensvi

This package is a one-stop solution for downloading, cleaning, and analyzing street view imagery

https://github.com/koito19960406/zensvi

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.6%) to scientific vocabulary

Keywords

street-view-imagery
Last synced: 6 months ago

Repository

This package is a one-stop solution for downloading, cleaning, and analyzing street view imagery

Basic Info
Statistics
  • Stars: 121
  • Watchers: 3
  • Forks: 18
  • Open Issues: 6
  • Releases: 16
Topics
street-view-imagery
Created almost 3 years ago · Last pushed 7 months ago
Metadata Files
Readme Changelog Contributing License Citation

README.md


ZenSVI logo

ZenSVI

Primary Author: Koichi Ito (National University of Singapore)

Besides this documentation, we have published a comprehensive paper with detailed information and demonstration use cases. The paper provides in-depth insights into the package's architecture, features, and real-world applications.

ZenSVI is a comprehensive Python package for downloading, cleaning, and analyzing street view imagery. For more information about the package or to discuss potential collaborations, please visit my website at koichiito.com. The source code is available on GitHub.

This package is a one-stop solution for downloading, cleaning, and analyzing street view imagery, with comprehensive API documentation available at zensvi.readthedocs.io.


Installation of zensvi

```bash
$ pip install zensvi
```

Installation of pytorch and torchvision

Since zensvi uses PyTorch and torchvision, you may need to install them separately. Please refer to the official PyTorch website for installation instructions matching your platform and CUDA version.
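PyTorch install commands differ by platform, so a quick way to confirm the install worked is to check that both packages are importable. This is a generic, stdlib-only sketch (not part of zensvi):

```python
import importlib.util

def has_package(name: str) -> bool:
    """Return True if the named package can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

# Deep-learning dependencies used by zensvi's CV modules
for pkg in ("torch", "torchvision"):
    status = "found" if has_package(pkg) else "missing -- install it following pytorch.org"
    print(f"{pkg}: {status}")
```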

Usage

Downloading Street View Imagery

Mapillary

For downloading images from Mapillary, use the MLYDownloader. Ensure you have a Mapillary API key:

```python
from zensvi.download import MLYDownloader

mly_api_key = "YOUR_OWN_MLY_API_KEY"  # Please register your own Mapillary API key
downloader = MLYDownloader(mly_api_key=mly_api_key)

# with lat and lon:
downloader.download_svi("path/to/output_directory", lat=1.290270, lon=103.851959)

# with a csv file with lat and lon:
downloader.download_svi("path/to/output_directory", input_csv_file="path/to/csv_file.csv")

# with a shapefile:
downloader.download_svi("path/to/output_directory", input_shp_file="path/to/shapefile.shp")

# with a place name that works on OpenStreetMap:
downloader.download_svi("path/to/output_directory", input_place_name="Singapore")
```

KartaView

For downloading images from KartaView, use the KVDownloader:

```python
from zensvi.download import KVDownloader

downloader = KVDownloader()

# with lat and lon:
downloader.download_svi("path/to/output_directory", lat=1.290270, lon=103.851959)

# with a csv file with lat and lon:
downloader.download_svi("path/to/output_directory", input_csv_file="path/to/csv_file.csv")

# with a shapefile:
downloader.download_svi("path/to/output_directory", input_shp_file="path/to/shapefile.shp")

# with a place name that works on OpenStreetMap:
downloader.download_svi("path/to/output_directory", input_place_name="Singapore")
```

Amsterdam

For downloading images from Amsterdam, use the AMSDownloader:

```python
from zensvi.download import AMSDownloader

downloader = AMSDownloader()

# with lat and lon:
downloader.download_svi("path/to/output_directory", lat=52.379189, lon=4.899431)

# with a csv file with lat and lon:
downloader.download_svi("path/to/output_directory", input_csv_file="path/to/csv_file.csv")

# with a shapefile:
downloader.download_svi("path/to/output_directory", input_shp_file="path/to/shapefile.shp")

# with a place name that works on OpenStreetMap:
downloader.download_svi("path/to/output_directory", input_place_name="Amsterdam")
```

Global Streetscapes

For downloading the NUS Global Streetscapes dataset, use the GSDownloader:

```python
from zensvi.download import GSDownloader

downloader = GSDownloader()

# Download all data
downloader.download_all_data(local_dir="data/")

# Or download specific subsets
downloader.download_manual_labels(local_dir="manual_labels/")
downloader.download_train(local_dir="manual_labels/train/")
downloader.download_test(local_dir="manual_labels/test/")
downloader.download_img_tar(local_dir="manual_labels/img/")
```

Analyzing Metadata of Mapillary Images

To analyze the metadata of Mapillary images, use the MLYMetadata:

```python
from zensvi.metadata import MLYMetadata

path_input = "path/to/input"
mly_metadata = MLYMetadata(path_input)
mly_metadata.compute_metadata(
    unit="image",  # unit of the metadata; other options are "street" and "grid"
    indicator_list="all",  # indicators to compute, as a space-separated string (e.g., "year month day") or "all" for all indicators
    path_output="path/to/output"  # path to the output file
)
```

Running Segmentation

To perform image segmentation, use the Segmenter:

```python
from zensvi.cv import Segmenter

segmenter = Segmenter(
    dataset="cityscapes",  # or "mapillary"
    task="semantic"  # or "panoptic"
)
segmenter.segment(
    "path/to/input_directory",
    dir_image_output="path/to/image_output_directory",
    dir_summary_output="path/to/segmentation_summary_output"
)
```

Running Places365

To perform scene classification, use the ClassifierPlaces365:

```python
from zensvi.cv import ClassifierPlaces365

# initialize the classifier
classifier = ClassifierPlaces365(
    device="cpu",  # device to use (either "cpu", "cuda", or "mps")
)

# run classification
classifier.classify(
    "path/to/input_directory",
    dir_image_output="path/to/image_output_directory",
    dir_summary_output="path/to/classification_summary_output"
)
```

Running PlacePulse 2.0 Prediction

To predict the PlacePulse 2.0 score, use the ClassifierPerception:

```python
from zensvi.cv import ClassifierPerception

classifier = ClassifierPerception(
    perception_study="safer",  # other options: "livelier", "wealthier", "more beautiful", "more boring", "more depressing"
)
dir_input = "path/to/input"
dir_summary_output = "path/to/summary_output"
classifier.classify(
    dir_input,
    dir_summary_output=dir_summary_output
)
```

You can also use the ViT version for perception classification:

```python
from zensvi.cv import ClassifierPerceptionViT

classifier = ClassifierPerceptionViT(
    perception_study="safer",  # other options: "livelier", "wealthier", "more beautiful", "more boring", "more depressing"
)
dir_input = "path/to/input"
dir_summary_output = "path/to/summary_output"
classifier.classify(
    dir_input,
    dir_summary_output=dir_summary_output
)
```

Running Global Streetscapes Prediction

To predict the Global Streetscapes indicators, use:

- ClassifierGlare: whether the image contains glare
- ClassifierLighting: the lighting condition of the image
- ClassifierPanorama: whether the image is a panorama
- ClassifierPlatform: the platform of the image
- ClassifierQuality: the quality of the image
- ClassifierReflection: whether the image contains reflection
- ClassifierViewDirection: the view direction of the image
- ClassifierWeather: the weather condition of the image

```python
from zensvi.cv import ClassifierGlare

classifier = ClassifierGlare()
dir_input = "path/to/input"
dir_summary_output = "path/to/summary_output"
classifier.classify(
    dir_input,
    dir_summary_output=dir_summary_output,
)
```

Running Grounding Object Detection

To run grounding object detection on the images, use the ObjectDetector:

```python
from zensvi.cv import ObjectDetector

detector = ObjectDetector(
    text_prompt="tree",  # specify the object(s) (single type: "building"; multiple types: "car . tree")
    box_threshold=0.35,  # confidence threshold for box detection
    text_threshold=0.25  # confidence threshold for text
)

detector.detect_objects(
    dir_input="path/to/image_input_directory",
    dir_image_output="path/to/image_output_directory",
    dir_summary_output="path/to/detection_summary_output",
    save_format="json"  # or "csv"
)
```
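To illustrate what the box threshold does: detections whose confidence falls below it are discarded before reaching the output summary. A hypothetical sketch (the dict layout here is illustrative, not zensvi's actual output schema):

```python
def filter_detections(detections, box_threshold=0.35):
    """Keep only detections whose confidence score meets the box threshold."""
    return [d for d in detections if d["score"] >= box_threshold]

raw = [
    {"label": "tree", "score": 0.82},
    {"label": "tree", "score": 0.21},  # below the 0.35 threshold, dropped
]
print(filter_detections(raw))  # only the 0.82 detection survives
```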

Running Depth Estimation

To estimate the depth of the images, use the DepthEstimator:

```python
from zensvi.cv import DepthEstimator

depth_estimator = DepthEstimator(
    device="cpu",  # device to use (either "cpu", "cuda", or "mps")
    task="relative",  # task to perform (either "relative" or "absolute")
    encoder="vitl",  # encoder variant ("vits", "vitb", "vitl", "vitg")
    max_depth=80.0  # maximum depth for absolute estimation (only used when task="absolute")
)

dir_input = "path/to/input"
dir_image_output = "path/to/image_output"  # directory for the estimated depth maps
depth_estimator.estimate_depth(
    dir_input,
    dir_image_output
)
```

Running Embeddings

To generate embeddings and search for similar images, use the Embeddings class:

```python
from zensvi.cv import Embeddings

emb = Embeddings(model_name="resnet-18", cuda=True)
emb.generate_embedding(
    "path/to/image_directory",
    "path/to/output_directory",
    batch_size=1000,
)
results = emb.search_similar_images(
    "path/to/target_image_file",
    "path/to/embeddings_directory",
    20  # number of similar images to return
)
```
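Similarity search over embeddings is typically a nearest-neighbour ranking under cosine similarity. A minimal pure-Python sketch of that metric (zensvi's internal distance measure may differ):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal vectors score 0.0
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```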

Running Low-Level Feature Extraction

To extract low-level features, use the get_low_level_features function:

```python
from zensvi.cv import get_low_level_features

get_low_level_features(
    "path/to/input_directory",
    dir_image_output="path/to/image_output_directory",
    dir_summary_output="path/to/low_level_feature_summary_output"
)
```

Transforming Images

Transform images from panoramic to perspective or fisheye views using the ImageTransformer:

```python
from zensvi.transform import ImageTransformer

dir_input = "path/to/input"
dir_output = "path/to/output"
image_transformer = ImageTransformer(dir_input=dir_input, dir_output=dir_output)
image_transformer.transform_images(
    style_list="perspective equidistant_fisheye orthographic_fisheye stereographic_fisheye equisolid_fisheye",  # projection styles as a space-separated string
    FOV=90,  # field of view
    theta=120,  # angle of view (horizontal)
    phi=0,  # angle of view (vertical)
    aspects=(9, 16),  # aspect ratio
    show_size=100,  # size of the image to show (i.e., scale factor)
    use_upper_half=True,  # use the upper half of the image for sky view factor calculation
)
```
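For intuition about the FOV parameter: in a pinhole-camera model, the horizontal field of view fixes the focal length used when re-projecting a panorama to a perspective view. A generic sketch of that relationship (not the package's internal implementation):

```python
import math

def focal_length_px(fov_deg: float, width_px: int) -> float:
    """Focal length in pixels for a pinhole camera with the given horizontal FOV."""
    return (width_px / 2) / math.tan(math.radians(fov_deg) / 2)

# FOV=90 degrees makes the focal length approximately half the image width
print(focal_length_px(90, 1024))
```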

Creating Point Clouds from Images

To create a point cloud from images with depth information, use the PointCloudProcessor:

```python
from zensvi.transform import PointCloudProcessor
import pandas as pd

processor = PointCloudProcessor(
    image_folder="path/to/image_directory",
    depth_folder="path/to/depth_maps_directory",
    output_coordinate_scale=45,  # scaling factor for output coordinates
    depth_max=255  # maximum depth value for normalization
)

# Create a DataFrame with image information.
# The DataFrame should have columns similar to this structure:
data = pd.DataFrame({
    "id": ["Y2y7An1aRCeA5Y4nW7ITrg", "VSsVjWlr4orKerabFRy-dQ"],  # image identifiers
    "heading": [3.627108491916069, 5.209303414492613],  # heading in radians
    "lat": [40.77363963371641, 40.7757528007],  # latitude
    "lon": [-73.95482278589579, -73.95668603003708],  # longitude
    "x_proj": [4979010.676803163, 4979321.30902424],  # projected x coordinate
    "y_proj": [-8232613.214232705, -8232820.629621736]  # projected y coordinate
})

# Process images and save point clouds
processor.process_multiple_images(
    data=data,
    output_dir="path/to/output_directory",
    save_format="ply"  # output format: "pcd", "ply", "npz", or "csv"
)
```
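Conceptually, each depth-map pixel is back-projected into 3D before being assembled into a point cloud. A generic pinhole back-projection sketch (the intrinsics and variable names here are illustrative; PointCloudProcessor's actual projection may differ):

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth value into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point always maps onto the optical axis
print(pixel_to_point(320, 240, 10.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0))  # (0.0, 0.0, 10.0)
```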

Visualizing Results

To visualize the results, use the plot_map, plot_image, plot_hist, and plot_kde functions:

```python
from zensvi.visualization import plot_map, plot_image, plot_hist, plot_kde

# Plotting a map
plot_map(
    path_pid="path/to/pid_file.csv",  # path to the file containing latitudes and longitudes
    variable_name="vegetation",
    plot_type="point"  # either "point", "line", or "hexagon"
)

# Plotting images in a grid
plot_image(
    dir_image_input="path/to/image_directory",
    nrow=4,  # number of rows
    ncol=5  # number of columns
)

# Plotting a histogram
plot_hist(
    dir_input="path/to/data.csv",
    columns=["vegetation"],  # column names to plot histograms for
    title="Vegetation Distribution by Neighborhood"
)

# Plotting a kernel density estimate
plot_kde(
    dir_input="path/to/data.csv",
    columns=["vegetation"],  # column names to plot KDEs for
    title="Vegetation Density by Neighborhood"
)
```

Contributing

Interested in contributing? Check out the contributing guidelines. Please note that this project is released with a Code of Conduct. By contributing to this project, you agree to abide by its terms.

License

zensvi was created by Koichi Ito. It is licensed under the terms of the MIT License.

Please cite the following paper if you use zensvi in a scientific publication:

```bibtex
@article{2025_ceus_zensvi,
  author  = {Ito, Koichi and Zhu, Yihan and Abdelrahman, Mahmoud and Liang, Xiucheng and Fan, Zicheng and Hou, Yujun and Zhao, Tianhong and Ma, Rui and Fujiwara, Kunihiko and Ouyang, Jiani and Quintana, Matias and Biljecki, Filip},
  doi     = {10.1016/j.compenvurbsys.2025.102283},
  journal = {Computers, Environment and Urban Systems},
  pages   = {102283},
  title   = {ZenSVI: An open-source software for the integrated acquisition, processing and analysis of street view imagery towards scalable urban science},
  volume  = {119},
  year    = {2025}
}
```

Credits

- All packages this project depends on are listed in requirements.txt



Logo

Owner

  • Name: Koichi Ito
  • Login: koito19960406
  • Kind: user
  • Location: Singapore

An urban data scientist

GitHub Events

Total
  • Create event: 20
  • Release event: 3
  • Issues event: 28
  • Watch event: 87
  • Delete event: 18
  • Issue comment event: 27
  • Push event: 128
  • Pull request review comment event: 1
  • Pull request review event: 11
  • Pull request event: 56
  • Fork event: 17
Last Year
  • Create event: 20
  • Release event: 3
  • Issues event: 28
  • Watch event: 87
  • Delete event: 18
  • Issue comment event: 27
  • Push event: 128
  • Pull request review comment event: 1
  • Pull request review event: 11
  • Pull request event: 56
  • Fork event: 17

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 11
  • Total pull requests: 19
  • Average time to close issues: 9 months
  • Average time to close pull requests: 1 day
  • Total issue authors: 6
  • Total pull request authors: 5
  • Average comments per issue: 1.09
  • Average comments per pull request: 0.32
  • Merged pull requests: 14
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 8
  • Pull requests: 19
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 1 day
  • Issue authors: 5
  • Pull request authors: 5
  • Average comments per issue: 0.75
  • Average comments per pull request: 0.32
  • Merged pull requests: 14
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • koito19960406 (18)
  • kunifujiwara (3)
  • seshing (2)
  • frank984 (1)
  • Peter9192 (1)
  • ZhuYihan-UMI (1)
  • MahmoudAbdelRahman (1)
  • Reubengyl (1)
  • fzc961020 (1)
  • etoileboots (1)
  • bobleegogogo (1)
  • matqr (1)
Pull Request Authors
  • koito19960406 (43)
  • matqr (6)
  • Junguin (3)
  • fzc961020 (3)
  • ruirzma (2)
  • seshing (2)
  • MahmoudAbdelRahman (2)
  • zichengfan (1)
  • ZhuYihan-UMI (1)
Top Labels
Issue Labels
enhancement (12) bug (7) documentation (2) good first issue (2) invalid (1)
Pull Request Labels
enhancement (3) documentation (1)