New proDataMarket paper: Combining Sentinel-2 and LiDAR data for objective and automated identification of agricultural parcel features

Combining Sentinel-2 and LiDAR data for objective and automated identification of agricultural parcel features by Jesús Estrada, Héctor Sanchez, Lorena Hernanz, María José Checa and Dumitru Roman

This new proDataMarket paper explains how a comprehensive strategy combining remote sensing and field data can support more effective agricultural management. Satellite data are suitable for monitoring large areas over time, while LiDAR provides specific and accurate data on height and relief. Both types of data can be used for calibration and validation purposes, reducing the need for field visits and saving valuable resources. In this paper we propose a process for objective and automated identification of agricultural parcel features based on processing and combining Sentinel-2 data (to sense different types of irrigation patterns) and LiDAR data (to detect landscape elements). The proposed process was validated in several use cases in Spain, yielding high accuracy rates in the identification of parcel features. An important application example of the work reported in this paper is the European Union (EU) Common Agriculture Policy (CAP) funds assignment service, which would significantly benefit from a more objective and automated process for identifying agricultural parcel features, thereby enabling the EU to save significant amounts of money every year.

Although some issues regarding the generation and improvement of agricultural property datasets were already explained in our previous blog entry (Data workflow in CAPAS), this paper highlights the latest results of generating and using this new information.

Irrigation patterns map, obtained using the Sentinel-2 process

The main result of this analysis is that these external, and usually underused, data sources offer a powerful and accurate tool for generating new contrast and validation data for the information used by the Spanish CAP Payment Agency, and thus for providing a better service to landowners and farmers. In conclusion, the Sentinel-2 series and LiDAR can help to detect areas that are not eligible for grant assignment, support cross-checks, and serve as a tool for choosing field samples.

The document is available here.

Data Workflow in CAPAS

Description of the data workflow processes

TRAGSA, as a business case provider in the project, is developing the CAPAS service, which aims at publishing and integrating multi-sectorial data from several sources into an existing data-intensive service, targeting better Common Agriculture Policy (CAP) funds assignments to farmers and landowners. The goal is to leverage the data integration facilities offered by proDataMarket to better define the fund assignment features of parcels and subplots.

CAPAS is working to improve the efficiency and competitiveness of the existing Spanish CAP (Common Agriculture Policy) service by integrating additional datasets that were underused at the beginning of the proDataMarket project. Turning them into a powerful tool required creating and developing new data processing algorithms, so CAPAS is not only an end-user application: it also involves data collection, data modelling and data processing techniques.

The CAPAS business case is oriented towards the replacement of human-generated (subjective) data with more objective data that can be collected and integrated from different cross-sectorial sources in an automated way.

At least two external datasets (LIDAR and Copernicus Sentinel-2) are being used to improve the Spanish agricultural cadastre database. The economic value generated by this process and its relation to CAP funds assignment will be evaluated during the next year, in the final phase of the project.

Managing LIDAR data

LIDAR files are a collection of points stored as x, y, z values, which represent longitude, latitude and elevation, respectively. These data are hard to process for non-specialists. To use them as a powerful tool to objectively define the parameters of agricultural use of parcels and the presence of landscape elements, a new data processing and treatment algorithm has been created.

This algorithm classifies and groups the point cloud in order to simplify the huge amount of data. The point clouds are topologically processed to obtain connected areas as polygons or to keep them as single points. As a result, LIDAR datasets are transformed into new raster and vector files, which are more common data types and easier to work with. The overlaps and intersections of the newly produced datasets (such as landscape elements) will define the CAP parameters for a specific subplot or parcel.

Managing Satellite data

The Sentinels are a fleet of satellites designed specifically to deliver the wealth of data and imagery that are fundamental to the European Commission’s Copernicus program. The use of satellite images in CAPAS has already been explained in this blog entry.

Description of the source datasets and result dataset

The main source datasets of the CAPAS business case and the main processes used to obtain the output datasets are explained below:

LIDAR files

LIDAR files are available in two different formats: .las and .laz. The LAS file format is a public format commonly used to exchange 3-dimensional point cloud data between data users; LAS is simply an abbreviation of LASer. Because LAS files can be very large, the LAZ format provides a compressed (zipped) version of the LAS format.

Although developed primarily for the exchange of LIDAR point cloud data, the LAS format supports the exchange of any 3-dimensional x, y, z tuples. It maintains information specific to the LIDAR nature of the data while not being overly complex.

Technical description of LIDAR format

In the context of the proDataMarket project, the LAS files used in the CAPAS business case are simply collections of points (latitude, longitude, elevation).
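For readers unfamiliar with the format, a minimal sketch of reading such a point collection with the open-source laspy library (a toolkit assumption; the business case does not prescribe any particular software, and the file name is a placeholder) could look like this:

```python
# Minimal sketch: reading a LAS/LAZ point cloud with laspy.
import laspy

las = laspy.read("PNOA_tile.laz")   # reading .laz requires the lazrs or laszip backend

# Each point carries x, y, z coordinates plus attributes such as the classification code.
x, y, z = las.x, las.y, las.z
classes = las.classification

print(f"{len(las.points)} points, elevation range {z.min():.1f}-{z.max():.1f} m")
```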

Spanish LIDAR information is freely and openly available at http://centrodedescargas.cnig.es/CentroDescargas/buscadorCatalogo.do?codFamilia=LIDAR

SENTINEL files

The information to be used in the CAPAS business case is the Image Data (JPEG2000) provided by Copernicus at the Sentinels Scientific Data Hub (https://scihub.copernicus.eu/). A full description of the JPEG2000[1] format is beyond the scope of this blog entry, but the footnote gives the general idea.

Sentinel data are freely and openly available at:

https://sentinel.esa.int/web/sentinel/sentinel-data-access/access-to-sentinel-data

More information and general factsheet at: https://earth.esa.int/documents/247904/1848117/Sentinel-2_Data_Products_and_Access.

SIGPAC Database

The SigPAC database is a complex information system that covers the whole Spanish territory and all agricultural activities, as well as other activities related to biodiversity and nature conservation.

With regard to the SigPAC database, the main datasets produced or modified by CAPAS are:

  • Landscape Elements
  • Parcels and Subplots

The level of accessibility of the SigPAC database varies between Autonomous Communities. For example, it is open and freely available in Castile and León at http://www.datosabiertos.jcyl.es/web/jcyl/set/es/cartografia/SIGPAC/1284225645888

Data workflow process for CAPAS

The data workflow shown in the diagram below illustrates the evolution of the different datasets, their transformations and their integration to generate the final result datasets.

CAPAS Workflow


LIDAR processing

The Grouping process gathers the LIDAR points according to the following rules (a minimal remapping sketch follows the list):

  • Errors, noise and overlaps are not taken into account (Classifications 1, 4, 7 and 12). As a consequence, more than 50% of points are removed from the process.
  • Soil, water and buildings have their own groups
  • Classification 19 is considered as short trees
  • Classification 20 is considered as medium trees
  • Classifications 21 and 22 are grouped as tall trees
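A minimal sketch of this grouping step, assuming the laspy library and the classification codes above (the soil, building and water codes of 2, 6 and 9 follow the usual ASPRS convention and are an assumption here; the group names are illustrative only):

```python
# Minimal sketch: discard noise/overlap classes and count the remaining groups.
import laspy
import numpy as np

DISCARD = {1, 4, 7, 12}                              # errors, noise and overlaps
GROUPS = {2: "soil", 6: "building", 9: "water",      # assumed ASPRS codes
          19: "short_trees", 20: "medium_trees",
          21: "tall_trees", 22: "tall_trees"}

las = laspy.read("PNOA_tile.laz")
cls = np.asarray(las.classification)
keep = ~np.isin(cls, list(DISCARD))                  # typically drops over half the points

for code, group in GROUPS.items():
    n = int(np.count_nonzero(keep & (cls == code)))
    print(f"{group:13s} (class {code:2d}): {n} points")
```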

The result of this process is still a LAS file. The following image shows how the LIDAR points have been processed and classified: green points are trees, red points are soil, and orange and yellow points are bushes.

Classified LIDAR point cloud

The next steps, Rasterization and Vectorization, apply topological rules to group the points into grid cells (raster), which are then processed to obtain the final vector shapefile (a minimal sketch is given after the figure below).

The following image shows how the LIDAR points have been grouped to create topologically connected surfaces. In the image below, yellow areas are soil, orange areas are bushes and green areas are trees. Grey and blue areas (not present in this image) are buildings and water, respectively.

Topologically connected surfaces derived from the classified LIDAR points
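As an illustration of the rasterization and vectorization idea, here is a minimal sketch that bins the tree-classified points into a grid and extracts the connected cells as polygons with numpy and rasterio; the 5 m cell size, the file name and the choice of libraries are assumptions, and the actual topological rules used by TRAGSA are more elaborate:

```python
# Minimal sketch: rasterize tree-classified LIDAR points and vectorize the result.
import laspy
import numpy as np
from rasterio import features
from rasterio.transform import from_origin
from shapely.geometry import shape

las = laspy.read("PNOA_tile.laz")
cls = np.asarray(las.classification)
tree = np.isin(cls, [19, 20, 21, 22])                 # tree classes from the grouping step
tx, ty = np.asarray(las.x)[tree], np.asarray(las.y)[tree]

cell = 5.0                                            # grid resolution in metres (assumed)
cols = ((tx - tx.min()) // cell).astype(int)
rows = ((ty.max() - ty) // cell).astype(int)
grid = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.uint8)
grid[rows, cols] = 1                                  # cell contains at least one tree point

transform = from_origin(tx.min(), ty.max(), cell, cell)
polygons = [shape(geom) for geom, _ in
            features.shapes(grid, mask=grid == 1, transform=transform)]
print(f"{len(polygons)} connected tree surfaces")
```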

Once the trees class is defined in raster format from the LiDAR data, it is refined using Sentinel data, which is more up to date. The RGB and NDVI products help to identify which pixels have an NDVI value over 0.5, and the RGB product is used to check which pixels represent vegetation areas.
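A hedged sketch of this refinement step, assuming the Sentinel-2 red (B04) and near-infrared (B08) bands are read with rasterio and that the LiDAR trees raster has already been resampled to the same grid (all file names are placeholders):

```python
# Minimal sketch: compute NDVI from Sentinel-2 bands and keep only the LIDAR
# "tree" cells whose NDVI exceeds 0.5.
import numpy as np
import rasterio

with rasterio.open("T30TUK_B04.jp2") as red_src, \
     rasterio.open("T30TUK_B08.jp2") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

ndvi = (nir - red) / np.maximum(nir + red, 1e-6)      # avoid division by zero
vegetation = ndvi > 0.5                               # threshold mentioned in the text

# 0/1 trees raster from the LIDAR processing, assumed co-registered with the bands.
with rasterio.open("lidar_trees.tif") as tree_src:
    tree_grid = tree_src.read(1).astype(bool)

refined_trees = tree_grid & vegetation                # cells confirmed by Sentinel-2
```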

Finally, the trees auxiliary layer refined with Sentinel data is processed to obtain two different configurations:

  • Isolated trees
  • Copses

The final result of the process is a vector ESRI shapefile, in which the copses layer is a polygon feature type and the isolated trees layer is a point feature type. Both have a direct correspondence with the landscape elements.
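One way this split into copses and isolated trees could be sketched is with geopandas and a simple area threshold; the 100 m² threshold and the file names are illustrative assumptions, not the actual CAP landscape-element criteria:

```python
# Minimal sketch: split refined tree polygons into copses (polygons) and
# isolated trees (points).
import geopandas as gpd

trees = gpd.read_file("trees_refined.shp")            # polygons from the previous steps

copses = trees[trees.geometry.area >= 100].copy()     # kept as polygon features
isolated = trees[trees.geometry.area < 100].copy()
isolated["geometry] = isolated.geometry.centroid      # represented as point features

copses.to_file("copses.shp")
isolated.to_file("isolated_trees.shp")
```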

The overlaps between the detected landscape elements, the currently protected sites of the Natura 2000 network and the Land Parcel Identification System allow an accurate ecological value report to be produced for Spanish crop areas.

The LiDAR algorithm provides more detailed information, because the landscape value helps to identify which subplots within a parcel have more value, with the following benefits:

  • Farmers will get an economic benefit through fund assignments to maintain these tree formations, and
  • the ecosystem and its species will be preserved.

Ecological value report

This ecological value report is based on the following queries (a small computation sketch follows the list):

  • Query 1: Surface of Sites of Community Importance (LIC) / subplot area. Score between 0 and 1.
  • Query 2: Surface of Special Protection Areas for Birds (ZEPA) / subplot area. Score between 0 and 1.
  • Query 3: Protected Sites Value = query 1 + query 2. Score between 0 and 2.
  • Query 4: Number of isolated trees / subplot area. Score between 0 and 1.
  • Query 5: Surface of copses / subplot area. Score between 0 and 1.
  • Query 6: Landscape Elements Value = query 4 + query 5. Score between 0 and 2.
  • Query 7: Ecological Value = query 3 + query 6.
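A minimal sketch of how these queries could be computed with geopandas, assuming vector layers for the subplots, the LIC and ZEPA protected sites, the copses and the isolated trees; the file and column names (such as subplot_id) are illustrative assumptions:

```python
# Minimal sketch: ecological value queries computed per subplot with geopandas.
import geopandas as gpd

subplots = gpd.read_file("subplots.shp")
lic = gpd.read_file("lic.shp")
zepa = gpd.read_file("zepa.shp")
copses = gpd.read_file("copses.shp")
isolated = gpd.read_file("isolated_trees.shp")

areas = subplots.set_index("subplot_id").geometry.area

def area_ratio(layer):
    """Intersected surface of `layer` per subplot, divided by the subplot area."""
    inter = gpd.overlay(subplots[["subplot_id", "geometry"]], layer, how="intersection")
    surf = inter.assign(a=inter.geometry.area).groupby("subplot_id")["a"].sum()
    return (surf / areas).clip(upper=1).fillna(0)

q1 = area_ratio(lic)                                   # Query 1
q2 = area_ratio(zepa)                                  # Query 2
q3 = q1 + q2                                           # Query 3: Protected Sites Value

trees_in = gpd.sjoin(isolated, subplots[["subplot_id", "geometry"]], predicate="within")
q4 = (trees_in.groupby("subplot_id").size() / areas).clip(upper=1).fillna(0)   # Query 4
q5 = area_ratio(copses)                                # Query 5
q6 = q4 + q5                                           # Query 6: Landscape Elements Value

q7 = q3 + q6                                           # Query 7: Ecological Value
```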

Sentinel Products generation

First of all, Sentinel-2 (S2) imagery has to be downloaded from the ESA server. In the automatic download process we developed, selection parameters were incorporated so that only imagery satisfying our quality criteria is downloaded (a minimal download sketch is given after the product list below). Two kinds of products are generated from S2 imagery.

  • Simple products: Those generated from single-date imagery. Through an automatic process, TRAGSA is generating RGB products to support photo interpretation. Another simple product generated is the Normalized Difference Vegetation Index (NDVI), which is widely used for vegetation monitoring.
  • Complex products: Those generated from imagery acquired on different dates. The following four thematic layers are going to be created.
    • Permanent grassland: This layer will be useful to distinguish photosynthetically active vegetation from non-active (unproductive or bare soil) areas. It will therefore help to monitor the maintenance of existing permanent grassland, an agricultural practice beneficial for the climate and the environment (Regulation (EU) No 1307/2013).
    • Herbaceous and woody crops: By using decision algorithms, different crops can be identified. The results will be displayed in two different layers, one for herbaceous crops and another for woody crops.
    • Change detection layer: This layer will highlight areas where changes have happened. It will focus on forest and grassland areas in order to detect dramatic changes, such as those caused by logging or forest fires, as well as more subtle changes associated with alien invasive species (AIS), diseases and reforestation.
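The automated query and download step can be sketched with the open-source sentinelsat client; the credentials, area of interest, date range and cloud-cover threshold below are illustrative assumptions rather than the actual CAPAS selection parameters:

```python
# Minimal sketch: query and download Sentinel-2 imagery from the Copernicus
# Scientific Data Hub with sentinelsat.
from sentinelsat import SentinelAPI, read_geojson, geojson_to_wkt

api = SentinelAPI("user", "password", "https://scihub.copernicus.eu/dhus")
footprint = geojson_to_wkt(read_geojson("area_of_interest.geojson"))

products = api.query(footprint,
                     date=("20170101", "20170131"),
                     platformname="Sentinel-2",
                     cloudcoverpercentage=(0, 20))    # example quality criterion
api.download_all(products)
```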

So far, only one of the twin S2 satellites (Sentinel-2A) has been launched. When the second satellite (Sentinel-2B) is in orbit, the revisit time at the equator will be 5 days, which translates into 2-3 days at mid latitudes. This high revisit frequency will allow quicker updating of the SigPAC database compared with the current updates, which are based on lower-precision data (LANDSAT and SPOT5 satellites) or orthophoto flights commissioned by each Autonomous Community.

Final Result

As stated previously, the Common Agriculture Policy funds Assignments Service (CAPAS) is a set of tools that improves the existing Common Agriculture Policy (CAP) service, in order to innovatively manage and upgrade the CAP database provided by the Spanish Administration to farmers and landowners. It is important to note that this CAP database is one of the main pillars of the CAP funds calculation system. As mentioned earlier, the improvement process is based on leveraging new cross-sectorial data sources from different fields and geographical areas, and the result datasets will also be made available on the proDataMarket marketplace.

To use these new datasets as a powerful tool to objectively define the parameters of agricultural use of parcels, the presence of landscape elements and the temporal evolution of crops, the data processing and treatment algorithms explained above have so far been partially developed.

In summary, the use of LIDAR files modifies some parcel and subplot features, while Sentinel images will improve the definition of parcel and subplot land use and its temporal evolution.

The new datasets produced by CAPAS from these external sources will be RDFized and incorporated into the proDataMarket platform. Spanish rural property data, improved using new and underexploited datasets, will therefore be accessible through the proDataMarket platform, providing users with advanced visualization and querying features.
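As an illustration of the RDFization step, here is a minimal rdflib sketch that expresses one improved subplot record as triples; the namespace, property names and identifier are purely illustrative assumptions, not the actual proDataMarket vocabulary:

```python
# Minimal sketch: expressing one improved subplot record as RDF with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

CAPAS = Namespace("http://example.org/capas/")        # placeholder namespace

g = Graph()
subplot = CAPAS["subplot/47-100-0-0-5-23-1"]          # placeholder subplot identifier
g.add((subplot, RDF.type, CAPAS.Subplot))
g.add((subplot, CAPAS.landUse, Literal("pasture")))
g.add((subplot, CAPAS.ecologicalValue, Literal(1.3, datatype=XSD.decimal)))
g.add((subplot, CAPAS.hasLandscapeElement, CAPAS["landscapeElement/isolatedTree/421"]))

print(g.serialize(format="turtle"))
```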

[1] JPEG 2000 (JP2) is an image compression standard and coding system. It was created by the Joint Photographic Experts Group committee in 2000

Data Workflow in SoE

The datasets and challenges in integration

The State of Estate (SoE) business case focuses on generating an up-to-date, dynamic and high-quality report on State-owned properties and buildings in Norway. It collects and integrates the datasets listed below, which originate from heterogeneous sources and are of varying quality. Here are some scenarios that cause challenges in the integration process.

Matrikkel data

Although the Matrikkel data from the Norwegian mapping authority is one of the most authoritative sources of property data, not all of the information is up to date. This is sometimes caused by delays in administrative procedures in the municipalities, sometimes by owners not reporting changes to the municipalities because of the high cost of doing so, and sometimes by typos and other manual updating errors. In addition, buildings smaller than 15 square metres are not required to be registered in the Matrikkel.

Statsbygg’s property data

Statsbygg’s property data has been updated since the last report. However, the Matrikkel building number is not correctly registered on all buildings, and the address information is not necessarily up to date either. There may also be typos and other manual updating errors in the dataset.

Business Entity register

The Business Entity register dataset comes from another national authoritative source and contains information about ministries and their subordinate organizations. However, not all subordinate organizations of the ministries are registered as sub-organizations in the Business Entity register. The missing organizations need to be added manually as extra business entities to the dataset.

State-owned properties Report 2013-2014 (SoEReport2013)

The SoEReport2013 is a report from 2013, so it includes properties and buildings that may have been sold, rebuilt or demolished in the last few years. The old report also includes some non-reported ownership of government properties and buildings that we need to take care of in the new report. For example, several properties were registered as owned by Statsbygg in the old report; however, they are registered as owned by the King in the Matrikkel database, which means that Statsbygg has taken care of the King’s property without reporting the change of ownership to the municipalities.

ByggForAlle

The Matrikkel building number has not been registered on all buildings in the ByggForAlle dataset, and some of the key information may include typos or manual updating errors, or be out of date, too.

The data workflow

To meet the challenges in the data integration, we have developed a data workflow as shown in the diagram below. It illustrates the process of importing the datasets, controlling their quality, integrating them, and finally generating the result dataset. The involved roles and their activities are modelled as swimlanes. The original and generated datasets are modelled as data objects in the diagram, such as SoEReport2013, BusinessEntityRegister and NewOrgList_Comfirmed. The quality control process can be both machine-automated and manual work based on human tasks, and it takes care of the integration exceptions.

SoE data workflow diagram

There are 3 roles involved in this process.

  • The SystemAdmin is a technical role and its main tasks are dataset import and integration.
  • The SystemManager is a functional role whose main tasks are quality control and generating the SoE report, including organizing and communicating with the other involved organizations.
  • The PropertyResponsible is a role for each involved organization; its main tasks are to prepare data, perform quality control and submit the organization's own property list and building list.

The activity boxes are explained below:

  • ImportOldReportWithOrgList: The SystemAdmin starts by checking whether the SoE report from 2013 has been imported. If not, the SystemAdmin imports the report, which also includes the old organization list.
  • ImportMinistrySub_Brreg: Then the SystemAdmin imports the organization list of the Ministries and subordinate organizations from the Business Entity Register.
  • MergeOrgListBrreg_SoEReport2013: The two organization lists are merged.
  • EditComfirmOrgList: The SystemManager will get a signal to start editing and updating the list; the result will be the confirmed OrgList.
  • ImportOwnedPropertyBuildingFromMatrikkelBasedOnOrglist_Comfirmed: Based on the confirmed OrgList, the owned properties and buildings from the Cadastre database (Matrikkel) are imported by the SystemAdmin.
  • PrepareExportForOwned: The property responsible will prepare a property list in the agreed format.
  • ImportOwnedFromOrg: If some of the organizations, such as Statsbygg, have their own database or list of owned properties and buildings, these lists will be imported as necessary.
  • ImportByggForAlleData: Then the ByggForAlle data is imported.
  • MergeAllDatasets: Afterwards, the data from Matrikkel and the Business Entity Register (OrgList_comfirmed), the SoE report 2013, property data from organizations such as Statsbygg, and ByggForAlle are merged by the SystemAdmin (a simplified merge sketch follows this list).
  • QualityControlMergedList: The SystemManager will then start the quality control cycle of the merged list.
  • EditAndConfirmOwnedList: The property responsible in each organization will get the task to edit and confirm their property and building list.
  • ApproveAndFinalizeNewSoEReport: The SystemManager will do the final quality control before approving and finalizing the new SoE Report.
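A simplified sketch of the MergeAllDatasets idea, assuming pandas and a shared building number as the join key; the file and column names are illustrative assumptions, not the actual SoE data model:

```python
# Simplified sketch: merge the Matrikkel/Brreg-based list with the old SoE
# report on the building number and flag mismatches for quality control.
import pandas as pd

matrikkel_brreg = pd.read_csv("matrikkel_brreg_buildings.csv")   # integrated list
soe_2013 = pd.read_csv("soe_report_2013_buildings.csv")          # old report

merged = matrikkel_brreg.merge(soe_2013, on="building_number",
                               how="outer", indicator=True,
                               suffixes=("_matrikkel", "_soe2013"))

only_in_old_report = merged[merged["_merge"] == "right_only"]    # simple hatch on the map
only_in_matrikkel = merged[merged["_merge"] == "left_only"]      # cross hatch on the map
in_both = merged[merged["_merge"] == "both"]                     # solid fill on the map

print(len(only_in_old_report), len(only_in_matrikkel), len(in_both))
```

The three mismatch categories correspond directly to the symbols used in the example map below.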

 

Expected results and an example

Below is one of the expected results of data quality control and integration in the MergeAllDatasets step. The map below shows examples both of properties that are in the SoEReport2013 but not in the list based on the Matrikkel_Brreg integration, and of properties that are in the Matrikkel_Brreg integration but not in the SoEReport2013. After identifying the mismatches in this way, users can work further to clean the datasets and correct the wrong registrations in the source systems.

| Symbol | BRREG_Matrikkel integrated dataset | Old SoE Report | Example |
| --- | --- | --- | --- |
| Simple hatch | No | Yes | ", NORSK INST.FOR SKOG OG LANDSKAP, NORSK INSTITUTT FOR SKOG OG LANDSKAP"; ",BIOFORSK, TOLLEFSRUD MARI METTE" |
| Cross hatch | Yes | No | "STATENS VEGVESEN, ,STATENS VEGVESEN" |
| Land parcels filled with solid color | Yes | Yes | "MATTILSYNET,MATTILSYNET,MATTILSYNET" |

 

The figure below shows that, inside Campus Ås, some land parcels owned or leased by NMBU and Statens vegvesen according to the Matrikkel are not included in the old SoE report; these land parcels are marked with the cross hatch pattern. On the other hand, some land parcels from the old SoE report are not included in the list based on BRREG and Matrikkel, such as the simple-hatched land parcels labelled ", NORSK INST.FOR SKOG OG LANDSKAP, NORSK INSTITUTT FOR SKOG OG LANDSKAP" and ",BIOFORSK, TOLLEFSRUD MARI METTE". Both the simple hatch and cross hatch properties in the map need to be quality checked and confirmed in the QualityControlMergedList and subsequent EditAndConfirmOwnedList steps.

Map of mismatched land parcels at Campus Ås