
GeoServer REST Configuration

Project description

gsconfig


gsconfig is a python library for manipulating a GeoServer instance via the GeoServer RESTConfig API.

The project is distributed under the MIT License.

Installing

pip install gsconfig

For developers:

git clone git@github.com:boundlessgeo/gsconfig.git
cd gsconfig
python setup.py develop

Getting Help

There is a brief manual at http://boundlessgeo.github.io/gsconfig/. If you have questions, please ask them on the GeoServer Users mailing list: http://geoserver.org/comm/.

Please use the GitHub project at http://github.com/boundlessgeo/gsconfig for any bug reports. Pull requests are welcome, but please include tests where possible.

Sample Layer Creation Code

from geoserver.catalog import Catalog
from geoserver.util import shapefile_and_friends

cat = Catalog("http://localhost:8080/geoserver/rest")
topp = cat.get_workspace("topp")
shapefile_plus_sidecars = shapefile_and_friends("states")
# shapefile_and_friends should look on the filesystem to find a shapefile
# and related files based on the base path passed in
#
# shapefile_plus_sidecars == {
#    'shp': 'states.shp',
#    'shx': 'states.shx',
#    'prj': 'states.prj',
#    'dbf': 'states.dbf'
# }

# 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes)
# 'workspace' is optional (GeoServer's default workspace is used by... default)
# 'name' is required
ft = cat.create_featurestore(name, workspace=topp, data=shapefile_plus_sidecars)

Running Tests

Since the entire purpose of this module is to interact with GeoServer, the test suite is mostly composed of integration tests. These tests rely on a running copy of GeoServer and expect that instance to use the default data directory included with GeoServer; this data is also included in the GeoServer source repository as /data/release/. In addition, a PostgreSQL database is expected to be available at postgres:password@localhost:5432/db. You can test connecting to this database with the psql command line client (you will be prompted interactively for the password):

$ psql -d db -Upostgres -h localhost -p 5432

To override the assumed database connection parameters, the following environment variables are supported:

  • DATABASE

  • DBUSER

  • DBPASS

If present, psycopg will be used to verify the database connection prior to running the tests.
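
For example, to point the tests at a different database before running them (the values below are illustrative):

$ export DATABASE=gsconfig_test DBUSER=postgres DBPASS=password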

If provided, the following environment variables will be used to reset the data directory:

GEOSERVER_HOME

Location of the git repository to read the clean data from. If only this option is provided, git clean will be used to reset the data.

GEOSERVER_DATA_DIR

Optional location of the data dir GeoServer will be running with. If provided, rsync will be used to reset the data.

GS_VERSION

Optional environment variable allowing the catalog test cases to automatically download and start a vanilla GeoServer WAR from the web. Be sure that there are no running services on HTTP port 8080.

Here are the commands that I use to reset before running the gsconfig tests:

$ cd ~/geoserver/src/web/app/
$ PGUSER=postgres dropdb db
$ PGUSER=postgres createdb db -T template_postgis
$ git clean -dxff -- ../../../data/release/
$ git checkout -f
$ MAVEN_OPTS="-XX:PermSize=128M -Xmx1024M" \
GEOSERVER_DATA_DIR=../../../data/release \
mvn jetty:run

At this point, GeoServer will be running in the foreground, but it will take a few seconds before it actually begins listening for HTTP requests. You can stop it with CTRL-C (but don’t do that until you’ve run the tests!). You can run the gsconfig tests with the following command:

$ python setup.py test

Instead of restarting GeoServer after each run to reset the data, the following should allow re-running the tests:

$ git clean -dxff -- ../../../data/release/
$ curl -XPOST --user admin:geoserver http://localhost:8080/geoserver/rest/reload

More Examples - Updated for GeoServer 2.4+

Loading the GeoServer catalog using gsconfig is quite easy. The example below allows you to connect to GeoServer by specifying custom credentials.

from geoserver.catalog import Catalog
cat = Catalog("http://localhost:8080/geoserver/rest/", "admin", "geoserver")

The code below allows you to create a FeatureType from a Shapefile:

geosolutions = cat.get_workspace("geosolutions")
import geoserver.util
shapefile_plus_sidecars = geoserver.util.shapefile_and_friends("C:/work/gsconfig/test/data/states")
# shapefile_and_friends should look on the filesystem to find a shapefile
# and related files based on the base path passed in
#
# shapefile_plus_sidecars == {
#    'shp': 'states.shp',
#    'shx': 'states.shx',
#    'prj': 'states.prj',
#    'dbf': 'states.dbf'
# }
# 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes)
# 'workspace' is optional (GeoServer's default workspace is used by... default)
# 'name' is required
ft = cat.create_featurestore("test", shapefile_plus_sidecars, geosolutions)

It is possible to create JDBC virtual layers too. The code below creates a new SQL view called my_jdbc_vt_test, defined by a custom SQL query:

from geoserver.catalog import Catalog
from geoserver.support import JDBCVirtualTable, JDBCVirtualTableGeometry, JDBCVirtualTableParam

cat = Catalog('http://localhost:8080/geoserver/rest/', 'admin', '****')
store = cat.get_store('postgis-geoserver')
geom = JDBCVirtualTableGeometry('newgeom','LineString','4326')
ft_name = 'my_jdbc_vt_test'
epsg_code = 'EPSG:4326'
sql = 'select ST_MakeLine(wkb_geometry ORDER BY waypoint) As newgeom, assetid, runtime from waypoints group by assetid,runtime'
keyColumn = None
parameters = None

jdbc_vt = JDBCVirtualTable(ft_name, sql, 'false', geom, keyColumn, parameters)
ft = cat.publish_featuretype(ft_name, store, epsg_code, jdbc_virtual_table=jdbc_vt)

This example shows how to update a layer property. The same approach may be used with every catalog resource:

ne_shaded = cat.get_layer("ne_shaded")
ne_shaded.enabled=True
cat.save(ne_shaded)
cat.reload()
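
For instance, the same pattern works for a resource as well; a minimal sketch (assuming a featuretype named states is published in the topp workspace, with an illustrative title):

states = cat.get_resource("states", workspace="topp")
states.title = "USA Population"
cat.save(states)
cat.reload()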

Deleting a store from the catalog requires purging all of the associated layers first. This can be done with something like this:

st = cat.get_store("ne_shaded")
cat.delete(ne_shaded)
cat.reload()
cat.delete(st)
cat.reload()

There is also functionality for managing ImageMosaic coverages: it is possible to create new ImageMosaics, add granules to them, read the coverage metadata, modify the mosaic dimensions, and query the mosaic granules and list their properties.

The gsconfig methods map the GeoServer REST APIs for ImageMosaic.

To create a new ImageMosaic layer, you can prepare a zip file containing the properties files for the mosaic configuration. Refer to the GeoTools ImageMosaic plugin guide for details on the mosaic configuration. The package contains an already configured zip file with two granules. You need to update or remove the datastore.properties file before creating the mosaic, otherwise you will get an exception.
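
If you need to strip datastore.properties out of an existing zip before the upload, here is a minimal sketch using only the Python standard library (the file names are illustrative; this is not a gsconfig API):

import os
import zipfile

src = "NOAAWW3_NCOMultiGrid_WIND_test.zip"        # original mosaic package
dst = "NOAAWW3_NCOMultiGrid_WIND_test_clean.zip"  # copy without datastore.properties
with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
    for item in zin.infolist():
        if os.path.basename(item.filename) != "datastore.properties":
            zout.writestr(item, zin.read(item.filename))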

from geoserver.catalog import Catalog
cat = Catalog("http://localhost:8180/geoserver/rest")
cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip")

By default, cat.create_imagemosaic tries to configure the layer too. If you want to create the store only, you can specify the following parameter:

cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip", "none")

To retrieve the ImageMosaic coverage store from the catalog, you can do this:

store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test")

It is possible to add more granules to the mosaic at runtime. With the following method you can add granules that are already present at a local path on the GeoServer machine:

cat.harvest_externalgranule("file://D:/Work/apache-tomcat-6.0.16/instances/data/data/MetOc/NOAAWW3/20131001/WIND/NOAAWW3_NCOMultiGrid__WIND_000_20131001T000000.tif", store)

The method below allows you to send granules to the ImageMosaic remotely via POST. The granules will be uploaded and stored in the ImageMosaic index folder.

cat.harvest_uploadgranule("NOAAWW3_NCOMultiGrid__WIND_000_20131002T000000.zip", store)

To delete an ImageMosaic store, you can follow the standard approach of deleting the layers first. ATTENTION: at this time you need to manually clean the mosaic granules out of the data dir and, if you used a DB datastore, you must also drop the mosaic tables; a sketch of that manual cleanup follows the code below.

layer = cat.get_layer("NOAAWW3_NCOMultiGrid_WIND_test")
cat.delete(layer)
cat.reload()
cat.delete(store)
cat.reload()
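
A minimal sketch of the manual cleanup mentioned above (the data dir path, connection parameters, and table name are hypothetical; psycopg2 is only needed when the mosaic index lives in a database):

import shutil
import psycopg2  # only needed for a DB-backed mosaic index

# Remove the mosaic granules from the GeoServer data dir (hypothetical path).
shutil.rmtree("/opt/geoserver_data/data/NOAAWW3_NCOMultiGrid_WIND_test",
              ignore_errors=True)

# Drop the mosaic index table if a DB datastore was used (hypothetical DB and table).
conn = psycopg2.connect(host="localhost", dbname="db",
                        user="postgres", password="password")
with conn, conn.cursor() as cur:
    cur.execute('DROP TABLE IF EXISTS "NOAAWW3_NCOMultiGrid_WIND_test"')
conn.close()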

The method below allows you to load and update the coverage metadata of the ImageMosaic. You need to do this for every coverage of the ImageMosaic, of course.

coverage = cat.get_resource_by_url("http://localhost:8180/geoserver/rest/workspaces/natocmre/coveragestores/NOAAWW3_NCOMultiGrid_WIND_test/coverages/NOAAWW3_NCOMultiGrid_WIND_test.xml")
coverage.supported_formats = ['GEOTIFF']
cat.save(coverage)

By default the ImageMosaic layer does not have the coverage dimensions configured. It is possible to use the coverage metadata to update and manage the coverage dimensions. ATTENTION: the presentation parameter accepts only one of the following values: 'LIST', 'DISCRETE_INTERVAL', 'CONTINUOUS_INTERVAL'.

from geoserver.support import DimensionInfo
timeInfo = DimensionInfo("time", "true", "LIST", None, "ISO8601", None)
coverage.metadata = ({'dirName':'NOAAWW3_NCOMultiGrid_WIND_test_NOAAWW3_NCOMultiGrid_WIND_test', 'time': timeInfo})
cat.save(coverage)

Once the ImageMosaic has been configured, it is possible to read the coverages along with their granule schema and granule info.

from geoserver.catalog import Catalog
cat = Catalog("http://localhost:8180/geoserver/rest")
store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test")
coverages = cat.mosaic_coverages(store)
schema = cat.mosaic_coverage_schema(coverages['coverages']['coverage'][0]['name'], store)
granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store)

The granule details can be read by doing something like this:

granules['crs']['properties']['name']
granules['features']
granules['features'][0]['properties']['time']
granules['features'][0]['properties']['location']
granules['features'][0]['properties']['run']

When the mosaic grows and accumulates a large set of granules, you may need to filter the granules query with a CQL filter on the coverage schema attributes:

granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'")
granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0")
granules = cat.list_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'")

