GeoServer + Elasticsearch for massive spatial data applications

Preface

Recently we have been studying technology solutions for massive spatial data applications. The stack still relies on GeoServer and extends from there; I wrote a GeoServer+GeoMesa technical blog earlier, which interested readers can check out. Given Elasticsearch's excellent query performance, this post explores the GeoServer + ES route. That said, I am somewhat disappointed with the quality of some domestic blogs and the low awareness of intellectual property. When I searched for GeoServer+ES solutions, the first six results were word-for-word identical yet attributed to different authors, so either one author uses several IDs or the work was plagiarized (the latter seems very likely; I have suffered from this before and have filed complaints). All six are straight copies of the GeoServer official documentation, essentially published after a quick translation, yet the official docs omit a crucial dependency without which GeoServer and ES cannot actually be combined for spatial data. Clearly none of those authors verified anything, and the posts end up misleading readers; this blind copy-and-paste is disrespectful to knowledge. I am writing this post as a supplement, hoping it helps more geospatial development enthusiasts.

Environment setup

1. JDK 1.8

2. Node.js (optional)

Used to install the es-head dependency packages via npm and to start es-head.

3. GeoServer 2.19

GeoServer connects to ES and handles spatial data service publishing and display.

4. GeoServer ES plugin


The official plug-in package is missing a GeoJSON dependency jar. Without it, the WMS map service cannot be used. Many blog posts copy the official documentation without verifying this, which deserves criticism.

gt-geojsondatastore-25.0 (the GeoJSON dependency jar package)

5. Elasticsearch 7.12.1

The ES installation and deployment steps can be looked up elsewhere.

6. Kibana 7.12.1

Kibana is used for visual monitoring and management of ES, and it also provides spatial data import and display functions.

(Figure: Kibana spatial data upload and display)

7. Logstash (optional)

Logstash can be used to synchronize data from relational databases (Oracle, PostgreSQL, MySQL, etc.) and text files into ES. For details, refer to the official website.

8. elasticsearch-head

A web-based ES visualization tool that can be used for data queries, similar to Kibana.

(Figure: es-head screenshot)


Integrated deployment

1. Copy the GeoServer ES plug-in package (together with the gt-geojsondatastore jar mentioned above) into GeoServer's WEB-INF/lib folder and start Tomcat.

(Figure: GeoServer with the ES plug-in loaded)

The ES parameter configuration is documented on the official website:

https://docs.geoserver.org/latest/en/user/community/elasticsearch/index.html

es spatial data import

Elasticsearch offers two spatial data types: geo_point (for single points) and geo_shape (for complex spatial vector features). ES supports the GeoJSON, geohash, and WKT spatial data formats; unfortunately, WKB is not supported, so remember to convert the format when synchronizing data from relational databases. In addition, only WGS84 latitude/longitude data is currently supported, so remember to convert the coordinate system before importing.
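Since WKB is not accepted and only WGS84 coordinates are supported, a conversion step is usually needed before indexing. Below is a minimal sketch (not from the project code) using JTS and GeoTools, assuming the source geometries come out of the relational database as WKB in EPSG:3857; adjust the source EPSG code to match your actual data.

import org.geotools.geometry.jts.JTS;
import org.geotools.referencing.CRS;
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.WKBReader;
import org.locationtech.jts.io.WKTWriter;
import org.opengis.referencing.crs.CoordinateReferenceSystem;
import org.opengis.referencing.operation.MathTransform;

public class GeomConverter {

    /**
     * Converts a WKB geometry (assumed to be EPSG:3857 here) into a WGS84 WKT string
     * that can be written into an ES geo_shape field.
     */
    public static String wkbToWgs84Wkt(byte[] wkb) throws Exception {
        Geometry geom = new WKBReader().read(wkb);                        // parse WKB from the relational database
        CoordinateReferenceSystem source = CRS.decode("EPSG:3857");       // source projection (assumption)
        CoordinateReferenceSystem target = CRS.decode("EPSG:4326", true); // WGS84, longitude first
        MathTransform transform = CRS.findMathTransform(source, target, true);
        Geometry wgs84 = JTS.transform(geom, transform);                  // reproject to WGS84
        return new WKTWriter().write(wgs84);                              // WKT is accepted by ES geo_shape
    }
}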

We can use the Kibana tool mentioned above to import GeoJSON file data. The figure below shows the result with the default point clustering:

(Figure: imported points rendered with the default clustering)

View the es-head data list as follows:

(Figure: es-head data query display)

Data synchronization with the ES Java API

The Java program synchronizes updates to ES. Part of the code is shown below:

/**
 * The bean name defaults to the method name.
 *
 * @return
 */
@Bean(name = "transportClient")
public TransportClient transportClient() {
    LOGGER.info("Starting Elasticsearch initialization...");
    TransportClient transportClient = null;
    try {
        // client settings
        Settings esSetting = Settings.builder()
                .put("cluster.name", clusterName) // cluster name
                .put("client.transport.sniff", true) // enable sniffing so the ES cluster nodes are discovered
                .put("thread_pool.search.size", Integer.parseInt(poolSize)) // search thread pool size, set to 5 for now
                .build();
        // further Settings customization goes here
        transportClient = new PreBuiltTransportClient(esSetting);
        TransportAddress transportAddress = new TransportAddress(InetAddress.getByName(hostName), Integer.valueOf(port));
        transportClient.addTransportAddresses(transportAddress);
    } catch (Exception e) {
        LOGGER.error("elasticsearch TransportClient create error!!", e);
    }
    return transportClient;
}

Initialize es client
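Note that TransportClient is deprecated in Elasticsearch 7.x and removed in 8.0. As an alternative, here is a minimal sketch of the same initialization with RestHighLevelClient, reusing the hostName and port fields above (the bean name is my own choice):

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

@Bean(name = "restHighLevelClient")
public RestHighLevelClient restHighLevelClient() {
    // note: the REST client uses the HTTP port (9200 by default), not the transport port (9300)
    return new RestHighLevelClient(
            RestClient.builder(new HttpHost(hostName, Integer.parseInt(port), "http")));
}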

/**
 * Create an index.
 *
 * @param index
 * @return
 */
@Override
public boolean createIndex(String index) {
    if (!isIndexExist(index)) {
        LOGGER.info("Index does not exist!");
    }
    CreateIndexResponse indexresponse = client.admin().indices().prepareCreate(index).execute().actionGet();
    LOGGER.info("Index creation acknowledged? " + indexresponse.isAcknowledged());
    return indexresponse.isAcknowledged();
}

/**
 * Delete an index.
 *
 * @param index
 * @return
 */
@Override
public boolean deleteIndex(String index) {
    if (!isIndexExist(index)) {
        LOGGER.info("Index does not exist!");
    }
    AcknowledgedResponse dResponse = client.admin().indices().prepareDelete(index).execute().actionGet();
    if (dResponse.isAcknowledged()) {
        LOGGER.info("delete index " + index + " successfully!");
    } else {
        LOGGER.info("Fail to delete index " + index);
    }
    return dResponse.isAcknowledged();
}

/**
 * Check whether an index exists.
 *
 * @param index
 * @return
 */
@Override
public boolean isIndexExist(String index) {
    IndicesExistsResponse inExistsResponse = client.admin().indices().exists(new IndicesExistsRequest(index)).actionGet();
    if (inExistsResponse.isExists()) {
        LOGGER.info("Index [" + index + "] exists!");
    } else {
        LOGGER.info("Index [" + index + "] does not exist!");
    }
    return inExistsResponse.isExists();
}

/**
 * @Description: check whether the given type exists under the index
 */
@Override
public boolean isTypeExist(String index, String type) {
    return isIndexExist(index)
            ? client.admin().indices().prepareTypesExists(index).setTypes(type).execute().actionGet().isExists()
            : false;
}

@Override
public boolean createMapping(String index, String type) {
    // index settings
    Map<String, Object> settings = new HashMap<>();
    settings.put("number_of_shards", 4);      // number of shards
    settings.put("number_of_replicas", 0);    // number of replicas; best 0 during import, 2-3 afterwards
    settings.put("refresh_interval", "10s");  // refresh interval

    CreateIndexRequestBuilder prepareCreate = client.admin().indices().prepareCreate(index);
    prepareCreate.setSettings(settings);
    try {
        // create the mapping
        XContentBuilder mapping = XContentFactory.jsonBuilder()
                .startObject()
                .startObject(type)
                .startObject("properties")
                .startObject("osm_id").field("type", "text").endObject()
                .startObject("code").field("type", "long").endObject()
                .startObject("fclass").field("type", "text").endObject()
                .startObject("name").field("type", "text").endObject()
                .startObject("geom").field("type", "geo_point").endObject()
                .endObject()
                .endObject()
                .endObject();
        prepareCreate.addMapping(type, mapping);
        CreateIndexResponse response = prepareCreate.execute().actionGet();
        return response.isAcknowledged();
    } catch (IOException e) {
        LOGGER.error(e.getMessage());
    }
    return false;
}

Encapsulated index management functions
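For illustration only, here is a hypothetical call to the methods above that creates an OSM POI index with the geo_point mapping (the index name osm_poi and type name poi are assumptions, not taken from the project):

// esService is assumed to be an instance of the service class shown above
if (!esService.isIndexExist("osm_poi")) {
    boolean acknowledged = esService.createMapping("osm_poi", "poi"); // geom is mapped as geo_point
    LOGGER.info("osm_poi index created: " + acknowledged);
}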

@Override
public String addData(JSONObject jsonObject, String index, String type, String id) {
    IndexResponse response;
    try {
        XContentBuilder xContentBuilder = XContentFactory.jsonBuilder()
                .startObject()
                .field("osm_id", jsonObject.getString("osmId"))
                .field("code", jsonObject.getIntValue("code"))
                .field("fclass", jsonObject.getString("fclass"))
                .field("name", jsonObject.getString("name"))
                .startObject("geom").field("lat", jsonObject.getDoubleValue("lat")).field("lon", jsonObject.getDoubleValue("lon")).endObject()
                .endObject();
        response = client.prepareIndex(index, type, id).setSource(xContentBuilder).get();
        LOGGER.info("addData response status:{},id:{}", response.status().getStatus(), response.getId());
        return response.getId();
    } catch (IOException e) {
        LOGGER.error(e.getMessage());
    }

    return null;
}

@Override
public String addData(Map<String, ?> source, String index, String type, String id) {
    IndexResponse response = client.prepareIndex(index, type, id).setSource(source).get();
    LOGGER.info("addData response status:{},id:{}", response.status().getStatus(), response.getId());
    return response.getId();
}

@Override
public String addData(JSONObject jsonObject, String index, String type) {
    return addData(jsonObject, index, type, UUID.randomUUID().toString().replaceAll("-", "").toUpperCase());
}

/**
 * Delete data by ID.
 *
 * @param index index, similar to a database
 * @param type  type, similar to a table
 * @param id    document ID
 */
@Override
public String deleteDataById(String index, String type, String id) {
    DeleteResponse response = client.prepareDelete(index, type, id).execute().actionGet();
    LOGGER.info("deleteDataById response status:{},id:{}", response.status().getStatus(), response.getId());
    return response.getId();
}

@Override
public String updateDataById(JSONObject jsonObject, String index, String type, String id) {
    UpdateRequest updateRequest = new UpdateRequest();

    updateRequest.index(index).type(type).id(id).doc(jsonObject);

    ActionFuture<UpdateResponse> updateResponseActionFuture = client.update(updateRequest);
    return updateResponseActionFuture.actionGet().getId();
}

@Override
public String updateDataById(Map<String, ?> source, String index, String type, String id) {
    return null;
}

/**
 * Get data by ID.
 *
 * @param index  index, similar to a database
 * @param type   type, similar to a table
 * @param id     document ID
 * @param fields comma-separated list of fields to return (all fields by default)
 * @return
 */
@Override
public Map<String, Object> searchDataById(String index, String type, String id, String fields) {
    GetRequestBuilder getRequestBuilder = client.prepareGet(index, type, id);

    if (StringUtils.isNotEmpty(fields)) {
        getRequestBuilder.setFetchSource(fields.split(","), null);
    }

    GetResponse getResponse = getRequestBuilder.execute().actionGet();

    return getResponse.getSource();
}

Adding, deleting, updating, and querying ES data
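And a hypothetical round trip through the CRUD methods above, assuming the JSONObject is fastjson's (the getIntValue/getDoubleValue getters suggest so) and reusing the assumed osm_poi index and poi type:

// build a POI document; the keys match what addData(JSONObject, ...) reads
JSONObject poi = new JSONObject();              // com.alibaba.fastjson.JSONObject
poi.put("osmId", "123456789");
poi.put("code", 2301);
poi.put("fclass", "restaurant");
poi.put("name", "sample poi");
poi.put("lat", 39.9042);                        // WGS84 latitude
poi.put("lon", 116.4074);                       // WGS84 longitude

String id = esService.addData(poi, "osm_poi", "poi", "1");                             // index the document
Map<String, Object> doc = esService.searchDataById("osm_poi", "poi", id, "name,geom"); // read it back
esService.deleteDataById("osm_poi", "poi", id);                                        // and delete it again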

GeoServer service publishing

(Figures: ES data store configuration interface, optional fields, WMS rendering, WFS request)
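Once the layer is published, it can be consumed like any other GeoServer layer. Below is a minimal sketch of fetching it over WFS as GeoJSON using plain JDK 1.8 classes; the local GeoServer URL and the workspace/layer name test:osm_poi are assumptions and should be replaced with whatever was actually published:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WfsRequestDemo {
    public static void main(String[] args) throws Exception {
        // WFS 2.0.0 GetFeature request, limited to 10 features, returned as GeoJSON
        String url = "http://localhost:8080/geoserver/ows"
                + "?service=WFS&version=2.0.0&request=GetFeature"
                + "&typeNames=test:osm_poi&outputFormat=application/json&count=10";
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // the GeoJSON feature collection
            }
        }
    }
}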

Follow-up

That is all for this introduction to the GeoServer + ES massive data application. If you are interested, feel free to follow and comment, and we can discuss it together. In a follow-up I plan to measure the query and rendering efficiency of GeoServer + ES and compare it comprehensively with GeoServer backed by a relational database. The knowledge points here are a little scattered; if there are any mistakes, please correct me. Thank you all for your support.

Java program source code (download the develop branch):

https://gitee.com/yangdengxian/geodatastore/tree/develop/

You can download POI data yourself from the OSM download site (Geofabrik):

http://download.geofabrik.de/asia/