Integrating multisectoral datasets: from satellites to a real estate scoring model

During a project meeting in Sofia on September 21, 2016, Cerved and TRAGSA teamed up to brainstorm ideas for re-using TRAGSA's methods for processing satellite imagery to analyse green areas in urbanised cities.

Fundamentals of TRAGSA Processing

A common feature of vegetation spectra is the high contrast observed between the red band and the Near Infrared (NIR) region. The optical instrument carried by the Sentinel-2 satellites samples 13 spectral bands, including high-resolution bands in the red and red-edge (bands 4, 5 and 6) as well as bands in the NIR (8 and 8A). Refer to this blog post for more details about processing Sentinel-2 data.
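
To make the red/NIR contrast concrete, below is a minimal sketch of computing the NDVI from Sentinel-2 band 4 (red) and band 8 (NIR) using rasterio; the file names are illustrative assumptions, not part of the original workflow.

```python
# Minimal sketch: NDVI from Sentinel-2 band 4 (red) and band 8 (NIR).
# File names are illustrative; both bands are assumed to share the same
# 10 m grid and to be already clipped to the area of interest.
import numpy as np
import rasterio

with rasterio.open("S2_B04_red.jp2") as red_src, rasterio.open("S2_B08_nir.jp2") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")
    profile = red_src.profile

# NDVI = (NIR - Red) / (NIR + Red): dense green vegetation gives high positive values.
ndvi = np.where((nir + red) > 0, (nir - red) / (nir + red), 0.0)

profile.update(driver="GTiff", dtype="float32", count=1)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi.astype("float32"), 1)
```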

Using the TRAGSA methodology it is possible to isolate and enhance vegetation in order to locate green areas within urban areas. Green areas are an important input to Cerved's innovative real estate evaluation model (which is being developed within one of Cerved's business cases in the project, as introduced in this blog post). Cerved uses open data to generate the green-area indicators defined for the model: green area coverage and distance to the nearest wood. The operations that Cerved performs to compute these indicators are similar to those that TRAGSA applies to satellite data, such as clustering green areas into larger areas and isolating trees and groups of trees. This motivated us to experiment with satellite data and TRAGSA's methodology, to see whether we could use a more complete, structured and up-to-date source of green-area information as input to our real estate evaluation model.


We identified Turin as a suitable test case: a highly urbanised Italian city that nevertheless pays particular attention to its green areas.

The steps that we followed:

  • extraction of the city boundaries of Turin in GeoJSON format, by SPAZIODATI
  • selection of good quality imagery for Turin from the Sentinel data repository, by TRAGSA
  • processing of S2 imagery in order to obtain a vector layer which indicates the presence or absence of a green area in each pixel (1/0), by TRAGSA
  • display of the green areas of the tiles (see the screenshot below) in the prototype Amerigo visualisation service, under development by SPAZIODATI
  • data processing and aggregation of the tiles into census cell areas, in order to derive green-area indicators for each census cell (see the sketch after this list), by CERVED
  • integration and testing of the score dedicated to green areas within the business model CCRS (Cerved Cadastral Report Service), by CERVED
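
For the aggregation step mentioned in the list, the following is a minimal sketch of how a green-area coverage indicator per census cell could be computed; it assumes the 1/0 green-area layer is available in raster form and uses the rasterstats library, so the file names and the library choice are illustrative, not the exact pipeline used by CERVED.

```python
# Sketch: fraction of each census cell covered by green areas, computed as the
# mean of a 1/0 green-area raster over each cell polygon. File names, the
# raster form of the layer and the nodata value are assumptions.
import geopandas as gpd
from rasterstats import zonal_stats

cells = gpd.read_file("turin_census_cells.geojson")

stats = zonal_stats(cells, "turin_green_areas.tif", stats=["mean"], nodata=255)
cells["green_coverage"] = [s["mean"] for s in stats]

cells.to_file("turin_green_indicators.geojson", driver="GeoJSON")
```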


The result of this experiment was extremely surprising: the detail and accuracy of this new score in identifying green areas (not only public green areas) is far greater than the accuracy of the other scores, which were developed from open datasets of public green areas.

New DataGraft-related papers

DataGraft: One-Stop-Shop for Open Data Management by D. Roman, N. Nikolov, A. Pultier, D. Sukhobok, B. Elvesæter, A. Berre, X. Ye, M. Dimitrov, A. Simov, M. Zarev, R. Moynihan, B. Roberts, I. Berlocher, S. Kim, T. Lee, A. Smith, and T. Heath. Semantic Web Journal, 2016.

  • Abstract: This paper introduces DataGraft – a cloud-based platform for data transformation and publishing. DataGraft was developed to provide better and easier to use tools for data workers and developers (e.g., open data publishers, linked data developers, data scientists) who consider existing approaches to data transformation, hosting, and access too costly and technically complex. DataGraft offers an integrated, flexible, and reliable cloud-based solution for hosted open data management. Key features include flexible management of data transformations (e.g., interactive creation, execution, sharing, and reuse) and reliable data hosting services. This paper provides an overview of DataGraft focusing on the rationale, key features and components, and evaluation.
  • Download paper

DataGraft: Simplifying Open Data Publishing by D. Roman, M. Dimitrov, N. Nikolov, A. Pultier, D. Sukhobok, B. Elvesæter, A. J. Berre, X. Ye, A. Simov and Y. Petkov. ESWC Demo paper, 2016.

  • Abstract: In this demonstrator we introduce DataGraft – a platform for Open Data management. DataGraft provides data transformation, publishing and hosting capabilities that aim to simplify the data publishing lifecycle for data workers (i.e., Open Data publishers, Linked Data developers, data scientists). This demonstrator highlights the key features of DataGraft by exemplifying a data transformation and publishing use case with property-related data.
  • Download paper

Tabular Data Cleaning and Linked Data Generation with Grafterizer by D. Sukhobok, N. Nikolov, A. Pultier, X. Ye, A. J. Berre, R. Moynihan, B. Roberts, B. Elvesæter, N. Mahasivam and D. Roman. ESWC Demo paper, 2016.

  • Abstract: Over the past several years the amount of published open data has increased significantly. The majority of this is tabular data, which requires powerful and flexible approaches for data cleaning and preparation in order to convert it into Linked Data. This paper introduces Grafterizer – a software framework developed to support data workers and data developers in the process of converting raw tabular data into linked data. Its main components include Grafter, a powerful software library and DSL for data cleaning and RDF-ization, and Grafterizer, a user interface for interactive specification of data transformations along with a back-end for management and execution of data transformations. The proposed demonstration will focus on Grafterizer’s powerful features for data cleaning and RDF-ization in a scenario using data about the risk of failure of transport infrastructure components due to natural hazards.
  • Download paper


Data Workflow in CAPAS

Description of the data workflow processes

TRAGSA, as a business case provider in the project, is developing the CAPAS service, which aims at publishing and integrating multi-sectoral data from several sources into an existing data-intensive service, targeting better assignment of Common Agriculture Policy (CAP) funds to farmers and land owners. The goal is to leverage the data integration facilities offered by proDataMarket to better define the fund assignment features of parcels and subplots.

CAPAS aims to improve the efficiency and competitiveness of the existing Spanish CAP (Common Agriculture Policy) service by integrating additional datasets that were underused at the beginning of the proDataMarket project. To turn them into a powerful tool, it was necessary to create and develop new data processing algorithms. Therefore, CAPAS is not only an end-user application: it also involves data collection, data modelling and data processing techniques.

The CAPAS Business Case is oriented towards the replacement of human-generated (subjective) data with more objective data that can be collected and integrated from different cross-sectorial sources in an automated way.

At least two external datasets (LIDAR and Copernicus SENTINEL2) are being used to improve the Spanish agricultural cadastre database. The economic value generated by this process and its relation to CAP funds assignment will be evaluated during the next year, in the final phase of the project.

Managing LIDAR data

LIDAR files are collections of points stored as x, y and z coordinates, which represent longitude, latitude and elevation, respectively. This data is hard to process for non-specialists. To use it as a powerful tool to objectively define the parameters of agricultural use of parcels and the presence of landscape elements, a new data processing and treatment algorithm has been created.

This algorithm classifies and groups the point clouds in order to simplify the huge amount of data. The point clouds are topologically processed to obtain connected areas as polygons, or points are kept as single points. As a result, LIDAR datasets are transformed into new raster and vector files, which are more common data types and easier to deal with. The overlaps and intersections of the newly produced datasets (such as landscape elements) will define the CAP parameters for a specific subplot or parcel.

Managing Satellite data

The Sentinels are a fleet of satellites designed specifically to deliver the wealth of data and imagery that are fundamental to the European Commission’s Copernicus program. The use of satellite images in CAPAS has already been explained in this blog entry.

Description of the source datasets and result dataset

The main source datasets of Business Case CAPAS and main processes used to obtain output datasets are explained below:

LIDAR files

LIDAR files are available in two different formats: .las and .laz. The LAS file format is a public file format commonly used to exchange 3-dimensional point cloud data between data users; LAS is simply an abbreviation of LASER. The LAZ format is a compressed version of LAS, created because of the large size of LAS files.

Although developed primarily for the exchange of LIDAR point cloud data, the LAS format supports the exchange of any set of 3-dimensional x, y, z tuples. The format maintains information specific to the LIDAR nature of the data while not being overly complex.

Technical description of LIDAR format

In the context of the ProDataMarket Project, LAS files used in the CAPAS business case will just be a collection of points (latitude, longitude, elevation).

Spanish LIDAR information is freely and openly available at


Sentinel-2 imagery

The information to be used in the CAPAS business case is the Image Data (JPEG2000) provided by Copernicus at the Sentinels Scientific Data Hub. A description of the JPEG2000[1] format is beyond the scope of this blog entry, but some general ideas will be described.

Sentinel data are freely and openly available at:

More information and general factsheet at:

SIGPAC Database

The SigPAC database is a complex information system that covers the whole Spanish territory and all agricultural activities, as well as other activities related to biodiversity and nature conservation.

With regard to the SigPAC database, the main datasets produced or modified by CAPAS are:

  • Landscape Elements
  • Parcels and Subplots

The level of accessibility of SigPAC database varies depending on Autonomous Communities. For example, it is open and freely available in Castile at

Data workflow process for CAPAS

The data workflow shown in the diagram below illustrates the evolution of the different datasets, their transformations and their integration to generate the final result datasets.

CAPAS Workflow

LIDAR processing

The Grouping process gathers the LIDAR points using the following rules (a minimal sketch of this step is shown after the list):

  • Errors, noise and overlaps (classifications 1, 4, 7 and 12) are not taken into account. As a consequence, more than 50% of the points are removed from the process.
  • Soil, water and buildings have their own groups.
  • Classification 19 is considered as short trees.
  • Classification 20 is considered as medium trees.
  • Classifications 21 and 22 are grouped as tall trees.
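
The following is a minimal sketch of how such a grouping could be implemented with the laspy library; the discard and tree codes are taken from the rules above, while the library choice, file names, group labels and the standard codes assumed for soil, buildings and water are illustrative assumptions.

```python
# Sketch of the grouping step: discard noise/overlap classes and map the
# remaining classification codes to the groups described above.
# Codes 2/6/9 for soil/buildings/water are assumed standard ASPRS classes.
import laspy
import numpy as np

DISCARD = {1, 4, 7, 12}  # errors, noise and overlaps (removed from the process)
GROUPS = {
    2: "soil", 6: "building", 9: "water",
    19: "short_trees", 20: "medium_trees",
    21: "tall_trees", 22: "tall_trees",
}

las = laspy.read("tile.laz")
cls = np.asarray(las.classification)

kept = cls[~np.isin(cls, list(DISCARD))]
labels = np.array([GROUPS.get(c, "other") for c in kept])

for group in sorted(set(labels)):
    print(group, int(np.sum(labels == group)))
```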

The result of this process is still a LAS file. The following image shows how the LIDAR points have been processed and classified (green points as trees, red as soil, orange and yellow as bushes).


The next steps, rasterisation and vectorisation, apply topological rules to group the points into grid cells (raster) that are then processed to obtain the final vector shapefile.
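
A rough sketch of what the rasterisation and vectorisation steps could look like is shown below; the cell size, the presence/absence encoding and the use of rasterio/shapely are assumptions for illustration, not TRAGSA's actual implementation.

```python
# Sketch: bin classified points into a presence/absence raster, then turn
# connected raster cells into polygons. Cell size and encoding are assumptions.
import numpy as np
from rasterio import features
from rasterio.transform import from_origin
from shapely.geometry import shape

CELL = 2.0  # raster cell size in metres (illustrative)

def rasterize_points(x, y, cell=CELL):
    """Return a 1/0 grid marking cells that contain at least one point."""
    x0, y1 = x.min(), y.max()
    cols = int(np.ceil((x.max() - x0) / cell)) + 1
    rows = int(np.ceil((y1 - y.min()) / cell)) + 1
    grid = np.zeros((rows, cols), dtype=np.uint8)
    grid[((y1 - y) / cell).astype(int), ((x - x0) / cell).astype(int)] = 1
    return grid, from_origin(x0, y1, cell, cell)

def vectorize(grid, transform):
    """Turn connected cells with value 1 into polygons (e.g. the copses layer)."""
    return [shape(geom) for geom, value in features.shapes(grid, transform=transform)
            if value == 1]

# Example: polygons = vectorize(*rasterize_points(tree_x, tree_y))
```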

The following image shows how the LIDAR points have been grouped to create topologically connected surfaces. In the image below, yellow areas are soil, orange areas are bushes and green areas are trees. Grey and blue surfaces (not present in this image) are buildings and water, respectively.


Once the trees class has been derived in raster format from the LiDAR data, it is refined using Sentinel data, which contains more up-to-date information. The NDVI product is used to identify pixels with an NDVI value above 0.5, and the RGB product is used to check visually which pixels represent vegetation areas.
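
As a small illustration of this refinement, the sketch below keeps only those pixels of the LiDAR-derived trees raster whose Sentinel-2 NDVI exceeds 0.5; file names are assumptions and the two rasters are assumed to be co-registered on the same grid.

```python
# Sketch: refine the LiDAR trees raster with the Sentinel-2 NDVI product.
import numpy as np
import rasterio

with rasterio.open("trees_lidar.tif") as t, rasterio.open("ndvi.tif") as n:
    trees = t.read(1)
    ndvi = n.read(1)
    profile = t.profile

trees_refined = ((trees == 1) & (ndvi > 0.5)).astype("uint8")

profile.update(dtype="uint8", count=1)
with rasterio.open("trees_refined.tif", "w", **profile) as dst:
    dst.write(trees_refined, 1)
```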

Finally, the trees auxiliary layer refined with Sentinel data is processed to obtain different configurations:

  • Isolated trees
  • Copses

The final result of the process is a vector ESRI shapefile, where the copses layer is a polygon feature type and the isolated trees layer is a point feature type. Both have a direct correspondence with the landscape elements.

The overlaps between the detected landscape elements, the currently protected sites of the Natura 2000 network and the Land Parcel Identification System allow an accurate ecological value report to be produced for Spanish crop areas.

The LiDAR algorithm makes it possible to obtain more detailed information, because the landscape value helps to identify which subplots within a parcel have the highest value, with the following benefits:

  • Farmers will get an economic benefit through fund assignments to maintain these tree formations, and
  • the ecosystem and its species will be preserved.


This ecological value report has been developed according to the following queries (a sketch of how they could be computed follows the list):

  • Query 1: Surface of Sites of Community Importance (LIC) / subplot area. Score between 0 and 1.
  • Query 2: Surface of Special Protection Areas for Birds (ZEPA) / subplot area. Score between 0 and 1.
  • Query 3: Protected Sites Value = query 1 + query 2. Score between 0 and 2.
  • Query 4: Number of isolated trees / subplot area. Score between 0 and 1.
  • Query 5: Surface of copses / subplot area. Score between 0 and 1.
  • Query 6: Landscape Elements Value = query 4 + query 5. Score between 0 and 2.
  • Query 7: Ecological Value = query 3 + query 6. Score between 0 and 4.
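
The following geopandas sketch shows one way the queries above could be computed per subplot; layer names, column names and the clipping of scores at 1 are illustrative assumptions, not the actual CAPAS implementation.

```python
# Sketch of the ecological value queries; all file and column names are assumed.
import geopandas as gpd

subplots = gpd.read_file("sigpac_subplots.shp")   # SigPAC subplots with an "id" column
lic = gpd.read_file("lic.shp")                    # Sites of Community Importance
zepa = gpd.read_file("zepa.shp")                  # Special Protection Areas for Birds
trees = gpd.read_file("isolated_trees.shp")       # point layer from the LiDAR step
copses = gpd.read_file("copses.shp")              # polygon layer from the LiDAR step

areas = subplots.set_index("id").area

def area_ratio(layer):
    """Overlapping surface of `layer` / subplot area, capped at 1."""
    inter = gpd.overlay(subplots[["id", "geometry"]], layer, how="intersection")
    return (inter.area.groupby(inter["id"]).sum() / areas).clip(upper=1).fillna(0)

q1 = area_ratio(lic)                               # Query 1
q2 = area_ratio(zepa)                              # Query 2
q3 = q1 + q2                                       # Query 3: Protected Sites Value
q4 = (gpd.sjoin(trees, subplots).groupby("id").size() / areas).clip(upper=1).fillna(0)  # Query 4
q5 = area_ratio(copses)                            # Query 5
q6 = q4 + q5                                       # Query 6: Landscape Elements Value
ecological_value = q3 + q6                         # Query 7
```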

Sentinel Products generation

First of all, Sentinel-2 (S2) imagery has to be downloaded from the ESA server. The automatic download process that was developed incorporates selection parameters so that only imagery satisfying our quality criteria is downloaded. Two kinds of products are generated from S2 imagery.

  • Simple products: those generated from single-date imagery. Through an automatic process, TRAGSA is generating RGB products to support photo interpretation. Another simple product generated is the Normalized Difference Vegetation Index (NDVI), which is widely used for vegetation monitoring.
  • Complex products: those generated from imagery acquired on different dates. The following four thematic layers are going to be created.
    • Permanent grassland: this layer will be useful to distinguish photosynthetically active vegetation from non-active (unproductive or bare soil) areas. It will therefore help to monitor the maintenance of existing permanent grassland, which is an agricultural practice beneficial for the climate and the environment (REGULATION (EU) No 1307/2013).
    • Herbaceous and woody crops: by using decision algorithms, different crops can be identified. The results will be displayed in two different layers, one for herbaceous crops and the other for woody crops.
    • Change detection layer: this layer will highlight areas where changes have happened. It will focus on forest and grassland areas in order to detect dramatic changes, such as those caused by logging or forest fires, as well as more subtle changes associated with AIS (Alien Invasive Species), diseases and reforestation.

So far, only one of the twin S2 satellites (Sentinel-2A) has been launched. When the second satellite (Sentinel-2B) is in orbit, the revisit time at the equator will be 5 days, which translates into 2-3 days at mid-latitudes. This short revisit time will allow quicker updating of the SigPAC database compared with the current updates, which are based on lower-precision data (LANDSAT and SPOT5 satellites) or orthophoto flights produced by each Autonomous Community.
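
As an illustration of the automated download with quality filtering mentioned at the start of this section, below is a hedged sketch using the sentinelsat client for the Scientific Data Hub; the credentials, the area-of-interest file and the cloud-cover threshold are assumptions, not TRAGSA's actual selection parameters.

```python
# Sketch: query and download Sentinel-2 products that meet simple quality criteria.
from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson

api = SentinelAPI("user", "password", "https://scihub.copernicus.eu/dhus")
footprint = geojson_to_wkt(read_geojson("area_of_interest.geojson"))

products = api.query(
    footprint,
    date=("20160101", "20161231"),
    platformname="Sentinel-2",
    cloudcoverpercentage=(0, 20),   # example quality criterion: low cloud cover
)
api.download_all(products)
```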

Final Result

As stated previously, the Common Agriculture Policy funds Assignment Service (CAPAS) is a set of tools that improves the existing Common Agriculture Policy (CAP) service, in order to innovatively manage and upgrade the CAP database provided by the Spanish Administration to farmers and land owners. It is important to note that this CAP database is one of the main pillars of the CAP funds calculation system. As mentioned earlier, the improvement process is based on leveraging new cross-sectorial data sources from different fields and geographical areas, and the result datasets will also be available on the proDataMarket marketplace.

To use these new datasets as a powerful tool to objectively define the parameters of agricultural use of parcels, the presence of landscape elements or the temporal evolution of crops, the data processing and treatment algorithms described above have, so far, been partially developed.

In summary, the use of LIDAR files modifies some Parcel and Subplot features, and SENTINEL images will improve the definition of Parcel and Subplot land use and its temporal evolution.

The new datasets produced by CAPAS from those external sources will be RDFized and incorporated into the proDataMarket platform. Spanish rural property data, improved using new and previously underexploited datasets, will therefore be accessible through the proDataMarket platform, providing users with advanced visualisation and querying features.
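
As a hedged illustration of what "RDFizing" one CAPAS result record might look like, here is a small rdflib sketch; the namespace, class and property names are invented for the example and are not the actual proDataMarket vocabulary.

```python
# Sketch: turning a single subplot record into RDF triples with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

CAPAS = Namespace("http://example.org/capas/")               # hypothetical namespace

g = Graph()
subplot = URIRef("http://example.org/capas/subplot/ES-47-123-0001")  # hypothetical id

g.add((subplot, RDF.type, CAPAS.Subplot))
g.add((subplot, CAPAS.landUse, Literal("permanent grassland")))
g.add((subplot, CAPAS.isolatedTrees, Literal(4, datatype=XSD.integer)))
g.add((subplot, CAPAS.ecologicalValue, Literal(1.7, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```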

[1] JPEG 2000 (JP2) is an image compression standard and coding system. It was created by the Joint Photographic Experts Group committee in 2000

Visualizing subterranean infrastructure with Augmented Reality


The SIM application (Subterranean Infrastructure Map App and Service) has been developed to ease construction and digging projects by visualising underground infrastructure with augmented reality.

Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data[1]. The application EVRY is developing uses augmented reality technology to present cadastral data distributed by the proDataMarket platform. With a connection to proDataMarket, SIM downloads the subterranean infrastructure data that exists at the user's location. This data is then used to visualise the underground grid of pipes and cables, as well as to give information about each pipe or cable. If there are many pipes in a given area, there could potentially be too much information to augment at once. The user can then filter out pipe groups (such as water, sewage or electricity) to get a more relevant view.


Relevant information could be a pipe's depth and owner, as well as its age and material. An issue with data like this is that it is often private. The data is also often owned by different actors, and a challenge is to give them an incentive to share it.


One of the major technical challenges the development team has been facing is the lack of accuracy on mobile devices. The GPS receivers and built-in compasses on mobile devices are not accurate enough to give an exactly correct representation of the pipe grid. It is possible, however, to increase the GPS accuracy by using an external GPS receiver. But even if the GPS position is correct, a small error in the heading will still produce unwanted results. In addition to positioning, another challenge is the data quality in a given area. To create a good augmented reality experience, the framework needs to know the height above mean sea level, and this information is not always present in the data set.


To accommodate these challenges, SIM has a calibration function that can “move” the pipe grid according to a given heading. It also calls the Google Elevation Service to get the pipe grid's height, so that it does not have to rely on elevation data from the data set. If the augmented experience is still not sufficient, SIM also includes a 2D map so the user can get an overview of the pipe grid.
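
A hedged sketch of these two calibration aids is shown below: fetching ground height from the Google Elevation API and rotating the pipe grid by a user-supplied heading offset. The coordinate handling is an illustrative assumption, not EVRY's actual implementation.

```python
# Sketch: ground elevation lookup and heading-based rotation of pipe grid points.
import math
import requests

def ground_elevation(lat, lng, api_key):
    """Height above mean sea level (metres) from the Google Elevation API."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/elevation/json",
        params={"locations": f"{lat},{lng}", "key": api_key},
        timeout=10,
    )
    return resp.json()["results"][0]["elevation"]

def rotate_grid(points, origin, heading_offset_deg):
    """Rotate pipe grid points (x, y in metres) around `origin` to compensate
    for a compass heading error reported by the user during calibration."""
    a = math.radians(heading_offset_deg)
    ox, oy = origin
    return [
        (ox + (x - ox) * math.cos(a) - (y - oy) * math.sin(a),
         oy + (x - ox) * math.sin(a) + (y - oy) * math.cos(a))
        for x, y in points
    ]
```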


If the user for some reason does not want to use the device camera (e.g. poor lighting conditions, a broken lens, etc.) or does not want to relocate to see the pipe grid, a Google Street View module is also implemented. This is a regular Google Street View, with the pipe grid integrated, so the user can stay at one location and see the pipe grid at another location.




New paper: Towards a Reference Architecture for Trusted Data Marketplaces

Towards a Reference Architecture for Trusted Data Marketplaces by Dumitru Roman and Stefano Gatti. 2nd International Conference on Open and Big Data, 2016.

  • Abstract: Data sharing presents extensive opportunities and challenges in domains such as the public sector, health care and financial services. This paper introduces the concept of “trusted data marketplaces” as a mechanism for enabling trusted sharing of data. It takes credit scoring—an essential mechanism of the entire world-economic environment, determining access for companies and individuals to credit and the terms under which credit is provisioned—as an example for the realization of the trusted data marketplaces concept. This paper looks at credit scoring from a data perspective, analyzing current shortcomings in the use and sharing of data for credit scoring, and outlining a conceptual framework in terms of a trusted data marketplace to overcome the identified shortcomings. The contribution of this paper is two-fold: (1) identify and discuss the core data issues that hinder innovation in credit scoring; (2) propose a conceptual architecture for trusted data marketplaces for credit scoring in order to serve as a reference architecture for the implementation of future credit scoring systems. The architecture is generic and can be adopted in other domains where data sharing is of high relevance.
  • Download paper


Data Workflow in SoE

The datasets and challenges in integration

The State of Estate (SoE) business case focuses on generating an up-to-date, dynamic and high-quality report on State-owned properties and buildings in Norway. It collects and integrates several datasets, listed below. The datasets originate from heterogeneous sources and are of varying quality. Here are some scenarios that cause challenges in the integration process.

Matrikkel data

Though Matrikkel data from the Norwegian mapping authority is one of the most authoritative sources of property data, not all of the information is up to date. This can be caused by delays in administrative procedures in municipalities, by owners not reporting changes to the municipalities because of the high cost of reporting them, or by typos and other manual updating errors. In addition, buildings smaller than 15 square metres are not required to be registered in the Matrikkel.

Statsbygg’s property data

Statsbygg's property data has been updated since the last report. However, the Matrikkel building number is not correctly registered for all buildings, and the address information is not necessarily up to date either. There may also be typos and other manual updating errors in the dataset.

Business Entity register

The Business Entity Register dataset comes from another national authoritative source and contains information about the ministries and their subordinate organizations. However, not all subordinate organizations of the ministries are registered as sub-organizations in the Business Entity Register. The missing organizations need to be added manually as extra business entities to the dataset.

State-owned properties Report 2013-2014 (SoEReport2013)

The SoEReport2013 is a report from 2013, and it includes properties and buildings that may have been sold, rebuilt or demolished in the last few years. The old report also includes some non-reported ownership of government properties and buildings that we need to take care of in the new report. For example, several properties were registered as owned by Statsbygg in the old report; however, they are registered as owned by the King in the Matrikkel database, which means that Statsbygg has been taking care of the King's property without the change of ownership being reported to the municipalities.


ByggForAlle

The Matrikkel building number has not been registered for all the buildings in the ByggForAlle dataset, and some of the key information may include typos or manual updating errors, or be out of date too.

The data workflow

To meet the challenges in the data integration, we've developed a data workflow as shown in the diagram below. It illustrates the process of importing the datasets, controlling their quality and integrating them, and finally generating the result dataset. The involved roles and their activities are modelled as swim lanes. The original and generated datasets are modelled as data objects in the diagram, such as SoEReport2013, BusinessEntityRegister, NewOrgList_Comfirmed, etc. The quality control process can involve both automated processing and manual work based on human tasks, and it takes care of the integration exceptions.


There are three roles involved in this process.

  • The SystemAdmin is a technical role whose main tasks are dataset import and integration.
  • The SystemManager is a functional role whose main tasks are quality control and generating the SoE report, including organizing and communicating with the other involved organizations.
  • The PropertyResponsible is a role within each involved organization; its main tasks are to prepare data, perform quality control and submit the organization's own property list and building list.

The activity boxes are explained below:

  • ImportOldReportWithOrgList: SystemAdmin starts with checking if the SoE report from 2013 is imported. If not, the SystemAdmin imports the report which also includes the old organization list.
  • ImportMinistrySub_Brreg: Then the SystemAdmin imports the organization list of the Ministries and subordinate organizations from the Business Entity Register.
  • MergeOrgListBrreg_SoEReport2013: The two organization lists are merged.
  • EditComfirmOrgList: The SystemManager gets a signal to start editing and updating the list; the result is the confirmed OrgList.
  • ImportOwnedPropertyBuildingFromMatrikkelBasedOnOrglist_Comfirmed: Based on the confirmed OrgList, the owned properties and buildings from the Cadastre database (Matrikkel) are imported by the SystemAdmin.
  • PrepareExportForOwned: The property responsible will prepare a property list in the agreed format.
  • ImportOwnedFromOrg: If some of the organizations, such as Statsbygg, have their own database or list of owned properties and buildings, these lists will be imported as necessary.
  • ImportByggForAlleData: Then the ByggForAlle data is imported.
  • MergeAllDatasets: Afterwards, the data from the Matrikkel and the Business Entity Register (OrgList_comfirmed), the SoE report 2013, property data from organizations such as Statsbygg, and the ByggForAlle data are merged by the SystemAdmin.
  • QualityControlMergedList: The SystemManager will then start the quality control cycle of the merged list.
  • EditAndConfirmOwnedList: The property responsible in each organization will get the task to edit and confirm their property and building list.
  • ApproveAndFinalizeNewSoEReport: The SystemManager will do the final quality control before approving and finalizing the new SoE Report.


Expected results and an example

Below is one of the expected results of data quality control and integration in the "MergeAllDatasets" step. The maps below show examples both of properties that are in the SoEReport2013 but not in the list based on the Matrikkel_Brreg integration, and of properties that are in the Matrikkel_Brreg integration but not in the SoEReport2013. After identifying the mismatches in this way, the users can work further to clean the datasets and correct the wrong registrations in the source systems.

Example legend entry:

  • Symbol: land parcels filled with solid color
  • In BRREG_Matrikkel integrated dataset: Yes
  • In old SoE Report: Yes
  • Example: “MATTILSYNET, MATTILSYNET, MATTILSYNET”


The figure below shows the area inside Campus Ås. Some land parcels owned or leased by NMBU and Statens vegvesen according to the Matrikkel are not included in the old SoE report; those land parcels are marked with a crosshatch pattern. On the other side, some land parcels from the old SoE report are not included in the list based on BRREG and the Matrikkel, such as the hatched land parcels with the labels “, NORSK INST.FOR SKOG OG LANDSKAP, NORSK INSTITUTT FOR SKOG OG LANDSKAP” or “,BIOFORSK, TOLLEFSRUD MARI METTE”. Both the hatched and crosshatched properties on the map need to be quality checked and confirmed in the “QualityControlMergedList” and subsequent “EditAndConfirmOwnedList” steps.
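
A hedged sketch of how such mismatches could be detected during the merge and quality-control steps is shown below; it assumes both lists identify parcels by a common cadastral key, and the file and column names are illustrative.

```python
# Sketch: find parcels present in only one of the two lists.
import pandas as pd

old_report = pd.read_csv("soe_report_2013.csv")        # old SoE report
matrikkel_brreg = pd.read_csv("matrikkel_brreg.csv")   # Matrikkel + BRREG integration

KEY = "cadastral_key"  # e.g. municipality/holding/parcel number (assumption)

only_in_old = old_report[~old_report[KEY].isin(matrikkel_brreg[KEY])]       # hatched on the map
only_in_new = matrikkel_brreg[~matrikkel_brreg[KEY].isin(old_report[KEY])]  # crosshatched on the map

print(f"{len(only_in_old)} parcels only in the 2013 report, "
      f"{len(only_in_new)} only in the Matrikkel/BRREG list")
```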


proDataMarket at the European Data Forum 2016

On 29 and 30 June proDataMarket participated in the European Data Forum (EDF) 2016, organized by Amsterdam Data Science and Technical University of Eindhoven under the auspices of the Dutch presidency of the European Union.


The conference, held in the Evoluon conference centre and former museum of science and technology (Eindhoven, NL), was attended by Commissioner Günther Oettinger, the Rector of Tilburg University, and the CEOs of Philips, Siemens and TomTom. The event brought together more than 600 attendees from across Europe and multiple technology sectors.

General View

proDataMarket presented a poster describing the project, its development and the conclusions reached so far in the different business cases and the central data-marketplace infrastructure, and explaining how proDataMarket aims to disrupt the property data market and demonstrate innovation across sectors where property data is relevant, by providing an integrated technical framework for effective data publishing and consumption and by showcasing data-driven business products.


Besides the main event, the IQmulus project organized a workshop addressing geospatial, mathematical and linked big data. This event addressed aspects of big data where geolocation, geospatial or mathematical structures play a central role. At this side event, the project coordinator, Dr. Dumitru Roman, also presented the whole project and its business cases.

Proof of Concept with Augmented Reality


The potential of the proDataMarket platform is huge, and by letting third-party actors use and contribute to the “big data” platform, the potential could be even greater. To show how proDataMarket can be utilized, EVRY is developing two mobile applications that rely on the proDataMarket service. The applications combine data from proDataMarket with augmented reality technology to give the user a visual representation of the data. In this way, EVRY will help contractors, construction companies and municipalities visualize future building projects. This is done with two iPad applications: the first shows underground infrastructure such as pipes and cables, and the other augments a 3D model in a real-world scene.

Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data [1]. The applications EVRY develops use augmented reality technology to present cadastral data distributed by proDataMarket. In this way the applications can show underground structures on the screen (through the device camera), as well as 3D models of future building projects in a “real-world scene” with information about the surroundings. This is done by having a 3D model with correct measurements (relative to its real-world size); by knowing the distance between the desired location and the user, the model can be scaled to the correct size according to that distance. Of course, if the user decides to manipulate the model (e.g. scaling it up), the size/distance relationship will no longer hold. The 3D model augmentation can ease both private and commercial building projects by giving a visual presentation of how a building may look in a landscape.
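
To illustrate the size/distance relationship, here is a small sketch of computing the ground distance between the user and the target location and deriving an apparent scale factor that falls off as 1/distance; the reference values are assumptions and this is not how the Wikitude SDK is actually configured.

```python
# Sketch: ground distance between user and target, and a 1/distance scale factor.
import math

def ground_distance_m(user, target):
    """Approximate distance in metres between two (lat, lon) points, using an
    equirectangular approximation (adequate for short distances)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*user, *target))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371000 * math.hypot(x, y)

def apparent_scale(distance_m, reference_distance_m=10.0, reference_scale=1.0):
    """Scale factor so the model keeps its real-world apparent size: a model
    authored at `reference_scale` for `reference_distance_m` shrinks as 1/distance."""
    return reference_scale * (reference_distance_m / distance_m)

# Example: scale = apparent_scale(ground_distance_m((59.91, 10.75), (59.92, 10.76)))
```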

The development process has been one of trial and error, and different augmented reality SDKs have been examined. In the end the development team chose the Wikitude SDK [2] to handle the augmentation processing. Augmenting a custom 3D model at a desired location is a suitable task for the Wikitude SDK: by setting the model as a “Point of Interest” (POI) and using “GeoLocation”, the user can place the model at a desired location on a 2D map (Google map).


The model will be scaled to the correct size relative to the distance from the user. When a model is placed, Wikitude will augment the model and the user can see and manipulate it with on-screen controls.


The manipulation controls are necessary because the iPad compass and location services are not accurate enough to get a satisfying result: if a user needs to place a model at a very exact location, there must be some way to tweak and calibrate the model. All in all, there are still some bugs left to fix in the applications, but the main functionality is in place and we are looking forward to showing demos of what we have made.



Cerved and SpazioDati at Data Driven Innovation 2016

Cerved and SpazioDati participated in the first edition of Data Driven Innovation 2016 with a presentation and a stand about the preliminary results of their collaborative work in the proDataMarket project.

Cerved & SpazioDati present the first prototype for proDataMarket @DataDrivenInnovation 2016


Data Driven Innovation is an open summit about big data hosted by Roma Tre University and organized by Codemotion. During the two days of the summit, many people had the opportunity to see the first results of Cerved and SpazioDati's work in the proDataMarket project: the Cerved Scouting Terrain Service (CST), an interactive map showing territory and socio-demographic scores for Bologna, such as the social distress index, the economic distress index, the socio-demographic score and many other territory scores.

CST, 2d business case of Cerved: Employees of the working population in Bologna


CST is the second business case Cerved is developing within the proDataMarket project: the goal of this service is to provide target users with a tool to search for and view property and territory information on a map. To achieve this, Cerved is developing value-added geo-marketing indicators, analyses and visualisations.

Authors: Claudio Castelli & Diego Sanvito

The ProDataMarket place as a tool for connecting real-estate data publishers and prospective data consumers

The main objective of the ProDataMarket project is to create a data marketplace for open and proprietary real-estate and related contextual data.

A marketplace is a place where data producers meet prospective data consumers. In addition to basic features for making data accessible and discoverable, a marketplace can provide further tools to help data producers “advertise” their data and better engage with potential data consumers. Among such tools are those that help data producers explain the type of their data and its attributes, and demonstrate its value. In this post we discuss how these tools are being realised in the ProDataMarket place.

Driving example

Let’s consider a national statistical office, for example the Italian National Institute of Statistics (ISTAT). ISTAT wants to disseminate one of its datasets: a dataset of census cells that cover the Italian region of Piemonte. This dataset subdivides the region of Piemonte into census sections according to ISTAT’s 2011 National Census. A census section is the smallest geographic unit for which the statistical variables of a population census are collected.

ISTAT is interested in explaining to prospective data consumers that the data can be useful when one needs to:

  • determine inter-municipal boundaries
  • describe different areas of a city in terms of some geographically-bound characteristics

Marketplace: initial steps

Figure 1 illustrates the initial steps that ISTAT performs at the marketplace to present its data.

Figure 1: The data producer prepares, describes and publishes its data at the marketplace, to make it accessible and discoverable.


ISTAT prepares its data for publication, describes it and catalogues it. Now, a prospective data consumer can discover and explore the dataset of census cells of the Piemonte region. While ISTAT has made the data accessible and discoverable, data consumers still have to figure out themselves what type of data it is, what is inside and what it is useful for.

Marketplace: explaining the data types

To explain the type of the data, ISTAT creates and attaches visualisations to its data, as shown in Fig. 2.

Figure 2: The data producer creates visualisations, to explain the type of the data


In addition to preparing, describing and publishing the Piemonte census sections dataset, ISTAT can create a map of all the census cells of the Piemonte region. This gives prospective data consumers an illustrative example of the data: when exploring the dataset, the data consumer can immediately see that the data contains polygons, each of which represents the geographic area of a census section.

Now that the type of the data is clearer, ISTAT can go further and explain various attributes of the data.

Marketplace: explaining attributes of the data

Figure 3 illustrates steps that ISTAT performs at the marketplace, to give the data consumers a glimpse of the data attributes.

Figure 3: The data producer queries the data, to explain data attributes.


As mentioned above, the dataset of the driving example contains the census cells' geometries. Every cell is attached to a certain municipality. This information becomes useful if one wants to represent single municipalities on a map. For example, to represent the city of Turin, ISTAT can prepare a subset of the census cells by filtering on the municipality attribute of each cell, as sketched below. Similarly, other attributes of the data can be highlighted.
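
A minimal sketch of this filtering step with geopandas is shown below; the file name and the name of the municipality attribute are illustrative assumptions.

```python
# Sketch: subset the Piemonte census sections to the municipality of Turin.
import geopandas as gpd

cells = gpd.read_file("piemonte_census_sections.geojson")

turin_cells = cells[cells["municipality"] == "Torino"]
turin_cells.to_file("turin_census_sections.geojson", driver="GeoJSON")
```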

Marketplace: putting data into context to explain its value

With the help of the marketplace, ISTAT can prepare, describe and visualise as many subsets of the data as it wants. Finally, to showcase and explain the value of the data to data consumers, ISTAT can put the census cells into context, as illustrated in Fig. 4.

Figure 4: The data producer augments its data from other data sources, to show the “value in context”.


This last approach is realised through the Augmentation Service, which supports querying a co-located data source with several functions to produce a new dataset. Currently, the Augmentation Service uses data from OpenStreetMap to provide context. For example, ISTAT can use the service to extract the number of bus stops found near each census cell, the distance to the closest train station, or the length of pedestrian paths in each census cell. Once the new augmented dataset is prepared, ISTAT can proceed with visualisations. For example, it can create a coloured map to show the density of nearby bus stops in Turin.
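
To give an idea of the kind of computation behind such an augmentation, here is a hedged geopandas sketch counting OSM bus stops near each census cell; it works on local extracts rather than through the Augmentation Service itself, and the file names, column names, projection and 300 m buffer are assumptions.

```python
# Sketch: count OSM bus stops within 300 m of each census cell.
import geopandas as gpd

cells = gpd.read_file("turin_census_sections.geojson").to_crs(epsg=32632)
bus_stops = gpd.read_file("osm_bus_stops.geojson").to_crs(epsg=32632)

buffered = cells[["cell_id", "geometry"]].copy()
buffered["geometry] = buffered.geometry.buffer(300)" if False else None  # placeholder removed below
```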