The cloud has become the dominant method for managing compute, storage, and application infrastructure. As application providers have migrated their on-premises business models into centralized cloud environments, they have delivered a more consistent user experience and reduced overall complexity, but they have also inadvertently produced more data silos. These silos are particularly difficult to navigate when trying to aggregate critical data for analysis and reporting. There are two primary use cases for edge computing: edge computing for data generation, and edge computing for data aggregation and analysis.

 

Many industrial applications generate large amounts of OT data. The concept of edge computing implies that the collection, analysis, and decision-support processes are performed at the source of the data. In these applications, signal data is generated in the warehouse or on the factory floor, and in some cases in remote locations. Instead of forwarding all of the collected data to a centralized cloud-based system, and incurring unnecessary network and computing costs, edge computing allows the signals to be filtered and processed directly at their source, reducing both latency and cost.
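
To make the idea concrete, here is a minimal Python sketch of filtering at the source. It assumes raw sensor readings arrive as a simple stream of numbers; the window size and deviation threshold are illustrative choices, not HarperDB specifics.

```python
# A minimal sketch of edge-side filtering: only readings that deviate
# meaningfully from a running baseline are worth forwarding upstream.
from statistics import mean

WINDOW = 50          # readings kept for the running baseline (illustrative)
DEVIATION = 0.10     # forward only readings >10% away from baseline (illustrative)

def filter_at_edge(readings):
    """Yield only the readings worth sending to the central system."""
    window = []
    for value in readings:
        window.append(value)
        if len(window) > WINDOW:
            window.pop(0)
        baseline = mean(window)
        if baseline and abs(value - baseline) / abs(baseline) > DEVIATION:
            yield value  # anomalous reading: forward it

# Example: 1,000 raw signals reduce to a single notable event
signals = [1.0] * 500 + [1.5] + [1.0] * 499
print(list(filter_at_edge(signals)))  # -> [1.5]
```

A thousand raw signals collapse to one event worth transmitting, which is the network and compute savings the paragraph above describes.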


Migrate Your Lab-Developed AI and ML Models Directly to the Edge

One of the challenges of migrating AI and ML models is providing the right data from production systems in real time. With the HarperDB platform, data can flow quickly from nodes across the network and combine seamlessly with real-time data at the edge. Since the data models are persistent and accessible through standard APIs, the models can run on the edge on your schedule.
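
As a rough illustration, the sketch below pulls recent rows from a local HarperDB node over its JSON operations API and hands them to a model. The instance URL, credentials, and the dev.sensor_readings table are hypothetical stand-ins for your own setup.

```python
# A minimal sketch of feeding an edge-resident model from HarperDB's
# HTTP operations API, which accepts SQL as a JSON "operation" payload.
import requests

HDB_URL = "http://localhost:9925"   # hypothetical edge node
AUTH = ("HDB_ADMIN", "password")    # hypothetical credentials

def query(sql):
    """Run a SQL statement against the local HarperDB node."""
    response = requests.post(HDB_URL, auth=AUTH, json={
        "operation": "sql",
        "sql": sql,
    })
    response.raise_for_status()
    return response.json()

# Pull the latest readings and hand them to your lab-developed model.
rows = query("SELECT temperature, vibration FROM dev.sensor_readings "
             "ORDER BY __updatedtime__ DESC LIMIT 100")
# predictions = model.predict(rows)  # your model, unchanged from the lab
```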

 

Eliminate the Noise

When collecting real-time data from systems, the majority of the data is not relevant. Using the HarperDB platform, you can ingest all of your data and then perform real-time filtering and analytics to eliminate the noise and concentrate on critical information.
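
Continuing the hypothetical setup above (same node, credentials, and query() helper), noise elimination can be as simple as a SQL tolerance band; the threshold values here are purely illustrative.

```python
# A sketch of noise elimination in SQL: ingest everything, then select
# only out-of-tolerance readings for analysis and reporting.
alerts = query(
    "SELECT id, temperature, __createdtime__ "
    "FROM dev.sensor_readings "
    "WHERE temperature > 80 OR temperature < 10"  # illustrative tolerance band
)
print(f"{len(alerts)} critical readings out of the full stream")
```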

 

Work Offline or with Intermittent Networks

All instances of HarperDB are completely independent and can therefore function without network connectivity. When the network becomes available again, queued replication resumes without losing any data.
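
Here is a minimal sketch of that behavior, again assuming the hypothetical node and table from the earlier examples: the insert lands on the local node immediately, and replication to peer nodes catches up once connectivity returns.

```python
# Writes succeed against the local node with or without an upstream
# network; replication to other nodes resumes when connectivity does.
import requests

def record_reading(temperature):
    requests.post("http://localhost:9925",            # hypothetical local node
                  auth=("HDB_ADMIN", "password"),     # hypothetical credentials
                  json={
                      "operation": "insert",
                      "schema": "dev",
                      "table": "sensor_readings",
                      "records": [{"temperature": temperature}],
                  }).raise_for_status()

record_reading(72.4)  # succeeds even while the site is offline
```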


In this industrial example, the speed of the production conveyor directly affects the quality of the resulting product. Sensors directly measure several parameters of the process and send these signals to a SCADA system, which is used to visually control the overall process. A HarperDB node was installed on the control network along with an AI model developed by data scientists. The AI model required a persistent and consistent set of data at specified intervals and would then adjust the speed of the conveyor to optimize the product. The model was built in the lab but was prone to crashing during early production trials. Collecting the data into a HarperDB node allowed the model to access data using simple SQL queries and then write the results directly back to the node, where they could be displayed to the operator on a dashboard. The operator could then adjust the conveyor speed based on the AI result.
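
A rough sketch of that read-score-write loop follows, reusing the hypothetical node, credentials, and table names from the earlier examples; the model itself is stubbed with a simple average, where the real deployment would call the lab-developed model.

```python
# The model reads a consistent window of sensor data on a schedule,
# scores it, and writes its recommendation back for the dashboard.
import time
import requests

HDB_URL = "http://localhost:9925"   # hypothetical edge node
AUTH = ("HDB_ADMIN", "password")    # hypothetical credentials

def hdb(payload):
    r = requests.post(HDB_URL, auth=AUTH, json=payload)
    r.raise_for_status()
    return r.json()

while True:
    # 1. Read the last interval of readings with plain SQL.
    rows = hdb({"operation": "sql",
                "sql": "SELECT belt_speed, temperature FROM dev.sensor_readings "
                       "ORDER BY __createdtime__ DESC LIMIT 60"})

    # 2. Score with the model (stubbed here as a simple average).
    recommended_speed = sum(r["belt_speed"] for r in rows) / max(len(rows), 1)

    # 3. Write the recommendation back for the operator's dashboard.
    hdb({"operation": "insert",
         "schema": "dev",
         "table": "conveyor_recommendations",
         "records": [{"recommended_speed": recommended_speed}]})

    time.sleep(30)  # run on your schedule
```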


Ready to Try HarperDB?

 

Our newest product, HarperDB Cloud, is a great way to get started on your edge-to-cloud project.

We are looking for developers to beta test HarperDB Cloud, and we are offering $250 in credits to make your project a reality. Be up and running in five minutes and let us worry about managing and hosting your database, so you can focus more on code and less on DevOps.