Data Protocol
Data is everywhere, but much of it is unstructured, which makes it hard to consume. The Druid Data Protocol allows the creation of structured, queryable datasets and is implemented as standard on the Druid Chain.
Smart Contract

Every dataset is an extension of a smart contract on the Druid Chain. Participants can implement a smart contract that defines the dataset's structure and functionality on top of the default functions for querying it.
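
As a rough sketch of that idea (the class and method names here are hypothetical, not the actual on-chain interface), a dataset can be pictured as a contract that inherits default query functions and adds its own structure and write rules:

```typescript
// Hypothetical sketch of a dataset contract; names are illustrative only.

interface DataRecord {
  id: string;
  timestamp: number;
  payload: Record<string, unknown>;
}

// Default query functions every dataset contract inherits.
abstract class DatasetContract {
  protected records: DataRecord[] = [];

  get(id: string): DataRecord | undefined {
    return this.records.find((r) => r.id === id);
  }

  between(from: number, to: number): DataRecord[] {
    return this.records.filter((r) => r.timestamp >= from && r.timestamp <= to);
  }

  // Each dataset defines its own structure and write rules on top.
  abstract submit(record: DataRecord): void;
}

// A participant's dataset: adds a structural check over the defaults.
class EmissionsDataset extends DatasetContract {
  submit(record: DataRecord): void {
    if (typeof record.payload["co2_ppm"] !== "number") {
      throw new Error("payload must contain a numeric co2_ppm field");
    }
    this.records.push(record);
  }
}
```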

IoT / Survey

The protocol's primary focus on the Druid Chain is receiving data from IoT sensors or surveyors on the ground working on ESG or impact projects. It is built to identify the source of the data and verify the claims it carries.
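
A minimal sketch of how that might look, assuming each generator has a key pair registered on-chain (the key handling below is illustrative, not the protocol's actual mechanism): the device signs each reading, and the signature ties the claim to its source.

```typescript
// Hypothetical source-verification sketch using Ed25519 signatures.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// A registered generator (IoT sensor or surveyor) and its on-chain key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The sensor signs its reading before broadcasting it.
const reading = Buffer.from(JSON.stringify({ sensor: "soil-7", moisture: 0.31 }));
const signature = sign(null, reading, privateKey);

// The chain checks that the claim came from the registered source.
const authentic = verify(null, reading, publicKey, signature);
console.log(authentic ? "source verified" : "rejected: unknown source");
```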

Automation

The protocol is built to automate data collection. A sensor can broadcast its data directly to the Druid Chain without manual intervention, and a machine learning process can automatically ingest that data for real-time analysis.
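
A hedged sketch of that flow, where broadcast() stands in for whatever chain client a real sensor would use and a rolling mean stands in for the machine learning step:

```typescript
// Hypothetical automation sketch: a sensor broadcasts on a timer, and a
// downstream process ingests readings as they arrive. Illustrative only.

type Reading = { sensor: string; value: number; timestamp: number };

const subscribers: Array<(r: Reading) => void> = [];
function broadcast(r: Reading): void {
  subscribers.forEach((fn) => fn(r)); // stand-in for a chain submission
}

// Automated ingestion: a rolling mean as a stand-in for real analysis.
const window: number[] = [];
subscribers.push((r) => {
  window.push(r.value);
  if (window.length > 10) window.shift();
  const mean = window.reduce((a, b) => a + b, 0) / window.length;
  console.log(`ingested ${r.value.toFixed(2)}, rolling mean ${mean.toFixed(2)}`);
});

// Once started, the sensor needs no manual intervention.
setInterval(() => {
  broadcast({ sensor: "air-3", value: 400 + Math.random() * 20, timestamp: Date.now() });
}, 1000);
```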

Features
The protocol has many moving parts, and each is built to perform a very specific role. Here is a quick overview of some of the key processes that define the Druid Data Protocol:
Schema
Every dataset on the Druid Chain follows a schema. A dataset cannot exist without one, which ensures all data is structured.
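
One plausible shape for such a schema, sketched below with hypothetical names: a dataset declares the fields it requires and the type of each.

```typescript
// Assumed schema shape; the protocol's actual encoding may differ.

type FieldType = "string" | "number" | "boolean";

interface Schema {
  name: string;
  fields: Record<string, FieldType>;
}

// Every record in this dataset must supply these typed fields.
const soilSurvey: Schema = {
  name: "soil-survey",
  fields: { plot_id: "string", moisture: "number", organic: "boolean" },
};
```
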
Versioning
A dataset's structure sometimes has to change as its needs evolve; versioning accommodates those changes.
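
A small sketch of how versioning might look under the schema shape assumed above (the VersionedSchema type is illustrative, not taken from the protocol): a change is published as a new version rather than a mutation of the old one.

```typescript
// Assumed versioning sketch; names are illustrative only.

interface VersionedSchema {
  name: string;
  version: number;
  fields: Record<string, "string" | "number" | "boolean">;
}

const v1: VersionedSchema = {
  name: "soil-survey",
  version: 1,
  fields: { plot_id: "string", moisture: "number" },
};

// v2 adds a field; records written under v1 stay valid against version 1.
const v2: VersionedSchema = {
  ...v1,
  version: 2,
  fields: { ...v1.fields, ph: "number" },
};
```
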
Generator
Every set of data must have an origin. Whether an IoT device or manual entry, a generator is where the data's journey starts.
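
For illustration (the Generator union below is assumed, not the protocol's actual encoding), each record could carry a tag naming its origin:

```typescript
// Hypothetical origin tag: a record never loses track of its generator.

type Generator =
  | { kind: "iot"; deviceId: string }
  | { kind: "manual"; surveyorId: string };

interface TaggedRecord {
  origin: Generator;
  payload: Record<string, unknown>;
  createdAt: number;
}

const fromDevice: TaggedRecord = {
  origin: { kind: "iot", deviceId: "soil-7" },
  payload: { moisture: 0.31 },
  createdAt: Date.now(),
};
```
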
Validator
All data is validated against the schema and its version to ensure it is correctly structured and all required fields are present.
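
A minimal validator sketch under the schema shape assumed earlier; the validate() function and its error format are illustrative:

```typescript
// Hypothetical validator: a record passes only if every field the schema
// requires is present and of the declared type.

type FieldType = "string" | "number" | "boolean";

interface Schema {
  name: string;
  version: number;
  fields: Record<string, FieldType>;
}

function validate(schema: Schema, payload: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [field, type] of Object.entries(schema.fields)) {
    if (!(field in payload)) errors.push(`missing required field: ${field}`);
    else if (typeof payload[field] !== type) errors.push(`${field} must be a ${type}`);
  }
  return errors; // empty means valid for this schema version
}

const schema: Schema = {
  name: "soil-survey",
  version: 2,
  fields: { plot_id: "string", ph: "number" },
};
console.log(validate(schema, { plot_id: "A-12", ph: "high" })); // ["ph must be a number"]
```
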
Certified
To ensure authenticity, data must be verified by one or more on-chain or off-chain providers.
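
One way to picture this, with an assumed Attestation shape and an illustrative threshold rule: a record counts as certified once enough providers have attested to it.

```typescript
// Hypothetical certification sketch; shapes and threshold are assumed.

interface Attestation {
  provider: string;
  recordId: string;
  authentic: boolean;
}

function isCertified(attestations: Attestation[], required: number): boolean {
  const confirmed = attestations.filter((a) => a.authentic).length;
  return confirmed >= required;
}

const votes: Attestation[] = [
  { provider: "oracle-a", recordId: "r-1", authentic: true },
  { provider: "auditor-b", recordId: "r-1", authentic: true },
];
console.log(isCertified(votes, 2)); // true: two providers confirmed
```
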
Finaliser
Data is finalised for use once all the conditions to validate and certify it have been met.
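
Putting the last two steps together, a hypothetical finaliser could be as simple as a check that both conditions hold (the RecordStatus shape below is assumed, not the protocol's actual state model):

```typescript
// Hypothetical finaliser: a record becomes usable only once it has both
// validated against its schema version and been certified.

interface RecordStatus {
  validated: boolean; // Validator: matches schema and version
  certified: boolean; // Certified: enough providers attested
}

function finalise(status: RecordStatus): "final" | "pending" {
  return status.validated && status.certified ? "final" : "pending";
}

console.log(finalise({ validated: true, certified: true }));  // "final"
console.log(finalise({ validated: true, certified: false })); // "pending"
```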