Executive Summary

The I2M Platform (an acronym for Intelligence to Manufacturing) is an enterprise-grade software appliance for continuous control and optimization of industrial plants.

Built on almost 30 years of know-how, its framework takes a structured, holistic approach to extracting actionable insights from process data and turning them into business value.

From data acquisition to intelligence layer feedback, it creates an end-to-end real-time loop with a cascade of algorithms that powers the predictive engine, addressing all the challenges involved in machine learning applied to manufacturing.

To provide a robust Service Level Agreement (SLA) compatible with industrial standards, the Platform complies with requirements and best practices established by the world’s leading Industrial IoT organizations (e.g. Plattform Industrie 4.0, IIC, NIST, ISA).

The Platform was launched at Hannover Messe 2018, and has been used in a large spectrum of applications, including both continuous processes (plastics, steel) and discrete manufacturing (automotive industry). Currently, it manages about US$1 million in production per day worldwide.

Cloud (IT)

(1~10 s latency)

Digitizes manufacturing within a structured, exportable dataset that tracks operations and describes the physical process over time, as well as its operational context.

Continuously monitors production with a suite of Unsupervised Machine Learning algorithms to detect abnormalities, autonomously applying analytical results.

Provides data-driven analytical results for decision support on the operational level (e.g. quality inspections, setpoint adjustments, maintenance planning).

INDUSTRIAL
INTERNET

MACHINE
LEARNING

ACTIONABLE
INSIGHTS

Capture/generate, structure, communicate and persist all relevant data to describe the manufacturing process within that time frame.

Deploy the Supervised and Unsupervised Machine Learning algorithms that fit within that time frame, autonomously applying analytical results.

Provide data-driven actionable insights for decision support within that time frame, allowing stakeholders to optimize processes at all levels.

Shopfloor (IT/OT)

(10~100 ms latency)

Industrial Internet Consortium (IIC)

The Industrial Internet Consortium was founded in March 2014 to bring together the organizations and technologies necessary to accelerate the growth of the industrial internet by identifying, assembling, testing and promoting best practices.

Plattform Industrie 4.0

Joint platform of German federal ministries and high-ranking representatives from industry, science and the trade unions focused on the development of operational solutions to maintain the country’s edge on the fourth Industrial Revolution.

National Institute of Standards and Technology (NIST)

The National Institute of Standards and Technology (NIST) is dedicated to supporting U.S. competitiveness in areas of national importance, from communications technology and cybersecurity to advanced manufacturing and disaster resilience.

Hannover Messe

The I2M Platform had its global launch at the Digital Factory sector of Hannover Messe 2018, the world’s leading trade fair for industrial technology – a global hotspot for the digital transformation of industry spearheaded by Germany.


Industrial Internet

“The foundation [of Industrie 4.0] is the availability of all relevant information in real time, by connecting all elements participating in the value chain, combined with the capability to deduce, from the data at any time, the optimal flow in the value chain.

By connecting humans, objects and systems, dynamic real-time optimized and self-organized inter-company value networks are created, which can be optimized according to different criteria – costs, reliability and resource consumption.”

Plattform Industrie 4.0

Machine-to-Machine (M2M) Communications

Models are developed to structure data so that computer-based systems can understand the logical interpretation of data sets (Machine-to-Machine Communications):

  • identifies and addresses gaps and opportunities in data collection;
  • captures the appropriate metadata as it is generated, in the sensor/actuator layer;
  • uses communication protocols (e.g. OPC UA, PROFINET) to provide integrability;
  • uses the platform’s syntactic and semantic models to provide interoperability;
  • provides context to raw data, allowing assumptions to be checked and supporting automated reasoning and decision-making.
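As a sketch of this contextualization step, the snippet below attaches semantic metadata to a raw tag/value pair; the tag addresses, feature names and units are illustrative assumptions, not the Platform's actual models:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical semantic model: maps a raw PLC tag address to its meaning.
SEMANTIC_MODEL = {
    "DB10.W4": {"feature": "barrel_temperature", "unit": "degC"},
    "DB10.W6": {"feature": "screw_torque", "unit": "Nm"},
}

@dataclass
class ContextualizedReading:
    tag: str        # syntactic address as read from the controller
    feature: str    # semantic meaning resolved from the model
    unit: str
    value: float
    timestamp: str

def contextualize(tag: str, value: float) -> ContextualizedReading:
    """Attach metadata to a raw tag/value pair so downstream systems can
    interpret it without knowing the PLC memory layout."""
    meta = SEMANTIC_MODEL.get(tag)
    if meta is None:
        raise KeyError(f"tag {tag!r} missing from semantic model")
    return ContextualizedReading(tag, meta["feature"], meta["unit"], value,
                                 datetime.now(timezone.utc).isoformat())

reading = contextualize("DB10.W4", 215.0)
```

With the semantic layer resolved at the edge, downstream consumers see `barrel_temperature` in degrees Celsius rather than an opaque memory address.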

During implementation, DataBot’s Operations team models the client’s process, mapping all relevant IT and OT data sources as well as their communication protocols:

  • PLC tags (e.g. temperature, position, torque, voltage, pH);
  • inspection results (e.g. scrap/rework status, noncompliance codes);
  • traceability (e.g. RFID, serial number, timestamp, W.O.);
  • production execution (e.g. recipes, BOM);
  • shopfloor, factory and enterprise-wide IT and OT systems (e.g. SCADA, MES, LIMS, ERP, CRM, BI).

    The Platform’s Machine-to-Machine Communications engine provides native support for several Ethernet interfaces (COTS).

    Other interfaces – including ones for legacy systems – can be requested as Modified Commercial Off-The-Shelf (MCOTS) – subject to technical analysis and contractual addendum.

     

    LEVEL 1
    • COTS: OPC UA; S7 Ethernet (ISO over TCP); Modbus; OPC DA*
    • MCOTS: Standard industrial internet protocol; Gateway RS232/485 to Ethernet

    LEVEL 2
    • COTS: OPC UA; OPC DA*; MySQL, SQL Server
    • MCOTS: Access*

    LEVELS 3, 4
    • COTS: I2M API (web server); .txt, .csv; SQL Server, MySQL, Oracle
    • MCOTS: Web server; Specific API

    * for legacy systems

     

    Big Data applications must deal with the 5 V’s:

    • Velocity: the speed at which vast amounts of data are generated, collected and analyzed;
    • Volume: the sheer amount of data generated every second by manufacturing;
    • Value: ensuring that the data collected can ultimately be monetized;
    • Variety: the different types of data that can be used;
    • Veracity: the quality and trustworthiness of the data.

    At this scale, data becomes too complex to store and analyze using traditional database technology. The use of relational databases leads to problems due to:

    • fixed schema, which makes them ill-suited for changing business requirements, as schema changes are problematic and time-consuming;
    • insufficient performance (too low) and latency (too high) for the new requirements; and
    • limited ability to scale cost-effectively.
    NoSQL systems are distributed, non-relational databases designed for large-scale data storage and for massively-parallel, high-performance data processing across a large number of commodity servers.

    For this application, we built a NoSQL database on top of LevelDB, a key-value store engine developed by Google. To speed up the insert phase, we created an In-Memory Layer to cache data during I/O tasks. With this approach, we can store any kind of manufacturing information, including entire PLC RAM blocks, in a grid of servers (data center and Edges) with real-time performance.
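A minimal Python sketch of the approach, assuming a plain dict as a stand-in for the LevelDB engine (the flush threshold and keys are illustrative):

```python
class CachedKVStore:
    """Key-value store with an in-memory write cache. A plain dict stands
    in for the persistent LevelDB engine; flush_threshold is illustrative."""

    def __init__(self, flush_threshold: int = 1000):
        self._disk = {}    # stand-in for the on-disk LevelDB store
        self._cache = {}   # in-memory layer absorbing bursts of inserts
        self._flush_threshold = flush_threshold

    def put(self, key: bytes, value: bytes) -> None:
        self._cache[key] = value
        if len(self._cache) >= self._flush_threshold:
            self.flush()

    def get(self, key: bytes) -> bytes:
        if key in self._cache:   # serve hot data straight from memory
            return self._cache[key]
        return self._disk[key]

    def flush(self) -> None:
        """Write all cached entries to the persistent layer in one batch."""
        self._disk.update(self._cache)
        self._cache.clear()

store = CachedKVStore(flush_threshold=2)
store.put(b"plc1:temp", b"215.0")
store.put(b"plc1:rpm", b"1200")   # second put triggers a batch flush
```

Batching writes through the in-memory layer turns many small inserts into one sequential write, which is the main reason this pattern speeds up the insert phase.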

    As always, cyber security is a must, and data at rest in NoSQL files is protected by cryptographic algorithms.

    Edge Computing

    Edge Computing is a paradigm that leverages the advantages of Cloud Computing while mitigating its drawbacks. Rather than processing all shopfloor data on-site or sending it all to the cloud for analysis, the I2M Edge Server dynamically distributes analysis across the computational resources available from endpoint to endpoint, improving connectivity:

    • Bandwidth: only aggregated data is sent to the cloud, reducing communication bandwidth requirements and improving data transfer time;
    • Latency and jitter: intelligence moves closer to the shopfloor, applying latency-aware AI algorithms at the edge to optimize processes autonomously in real time;
    • Availability and resilience: up to 48 hours of continued operation while offline from the cloud;
    • Privacy and security: data is anonymized and encrypted close to the source, before being sent to the cloud.
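To illustrate the bandwidth point, a window of raw samples can be reduced to a few summary statistics before upload; the feature values below are illustrative:

```python
from statistics import mean, pstdev

def aggregate_window(samples):
    """Reduce a window of raw shopfloor samples to summary statistics,
    so only a handful of numbers cross the WAN instead of every point."""
    return {
        "count": len(samples),
        "mean": mean(samples),
        "min": min(samples),
        "max": max(samples),
        "sigma": pstdev(samples),
    }

# Illustrative temperature samples captured at the edge.
summary = aggregate_window([214.8, 215.1, 215.0, 214.9])
```

At a 10 ms sampling rate this reduces thousands of points per feature per minute to one small record, which is what makes the 1.0 Mbps-per-PLC bandwidth envelope feasible.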

      “Edge computing is a decentralized computing infrastructure in which computing resources and application services can be distributed along the communication path from the data source to the cloud. That is, computational needs can be satisfied ‘at the edge,’ where the data is collected, or where the user performs certain actions.”

      Introduction to Edge Computing in IIoT
      Industrial Internet Consortium

      Efficient system design and a robust architecture allow the I2M Platform to store and process data where it is most useful. This allows critical interactions between assets to be dynamically coordinated within a timeframe that allows for a meaningful decision, without disrupting manufacturing:

      • Peer-to-peer networking (M2M), e.g. robots communicating about a product that has left from one’s scope to the other’s;
      • Edge-device collaboration, e.g. robots in a factory optimizing setpoints based on autonomous benchmark;
      • Distributed queries across data stored in devices, in the cloud and anywhere in between;
      • Distributed data management, defining where and what data is to be stored, and for how long;
      • Data governance including quality, discovery, usability, privacy and security aspects of data;
      • Analytical tools deployed in Levels 1-2, operating at latencies compatible with those layers (i.e. milliseconds to seconds).

      Since M2M Communications and Artificial Intelligence (AI) are not hindered by human reaction times, meaningful action can be taken to optimize processes on a second or sub-second basis.

      To enable this autonomous real-time management of operations within industrial bandwidth constraints, analytics must be latency-aware so that the right information is available at the right time for any given system or subsystem (e.g. 10 ms sampling, 100 ms communications, 1 s optimizations, 10 s tasks).

      • Data (as well as the appropriate metadata) is dynamically captured, contextualized, abstracted, analyzed, shared, stored, and ultimately discarded according to its validity, relevance, scope and time-sensitivity.
      • Analysis is performed within the appropriate time frame to allow for meaningful reaction (e.g. 100 ms communications, 1 s optimizations, 10 s tasks).
      • Stacks of tasks and operations are dynamically organized to ensure they are sequenced and prioritized according to time-sensitivity and computational resource consumption.
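The dynamic sequencing described above can be sketched with a priority queue keyed by time-sensitivity (the task names and deadlines are illustrative, not the Platform's actual scheduler):

```python
import heapq

# Tasks keyed by deadline in seconds: the most time-sensitive work is
# always popped first. Task names and deadlines are illustrative.
tasks = [
    (10.0, "batch KPI aggregation"),
    (0.1, "M2M communication"),
    (1.0, "setpoint optimization"),
    (0.01, "sensor sampling"),
]
heapq.heapify(tasks)

order = [heapq.heappop(tasks)[1] for _ in range(4)]
```

The heap guarantees that sub-second work (sampling, M2M messages) is never starved by long-running batch analytics.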

      Through efficient architecture and Artificial Intelligence, the I2M Platform is capable of maintaining real-time latency and dealing with events and data variation within a time window of 10 ms (IEC 61784-2 Conformance Class CC-B) or 100 ms (IEC 61784-2 Conformance Class CC-A).

        The edge server must have at least two network interface cards: one connected to the industrial Ethernet VLAN and another to the corporate backbone. Depending on the factory’s Ethernet infrastructure, the architecture can be:

        • Edge L1 connecting the industrial Ethernet VLAN directly to the data center – for small or time-critical applications;
        • Edge L2 connecting the industrial Ethernet VLAN directly to the data center – the standard;
        • Edge L1 connecting the industrial Ethernet VLAN to an Edge L2 and then to the data center – for very large factories or time-critical applications.

        In all cases, the edge level acts as a DMZ (demilitarized zone) and protects all data in motion to the data center with cryptographic algorithms inside a secure tunnel (VPN – virtual private network).

        The I2M Platform is appliance software developed in ANSI C/C++ with all functionality integrated, and no external dependencies beyond the operating system to manage CPU, memory, Ethernet and storage.

        Efficient architecture and system design allow the System to maintain high performance with minimal hardware requirements for implementation, whether on a physical machine (e.g. Edge L1, PLC) or a virtual one (e.g. Edge L3, datacenter).

          Edge Level 1:

          • Windows 10 (64-bit)
          • 2 cores, 2 GB RAM
          • Free HD: 100 GB (estimate)

          Edge Level 3:

          • Windows Server (2012/R2 to 2019) (64-bit)
          • 4 cores, 8 GB RAM
          • Free HD: 100 GB (estimate)

          Ethernet: 10 Mbps with internet access (for SaaS Licence).

          Estimated bandwidth consumption for each server:

          • Level 1: about 1.0 Mbps per PLC (VLAN)
          • VPN: about 1.0 Mbps (internet)
          • User: real bandwidth consumption depends on the application (number of users and type of view), but a typical large view consumes about 0.8 MB per JSON document.

          VPN Firewall:

          • Portal: i2m.pkb.cloud
          • DataCenter: [customerID].pkb.cloud (fixed IP)
          • VPN/API Port: 8443 and 1770 (TLS 1.2)

          Remote Desktop Connection:

          • Commissioning, upgrade and eventual technical support


            Datastreams from the shopfloor are continuously (~10 ms latency) captured, structured, encrypted, communicated and persisted in the I2M Edge Server. This creates the Digital Thread, a synchronized, detailed information flow that describes the physical process as well as its context.

              PRODUCT TRACKING

              Timestamps every discrete operation, generating and/or capturing all relevant traceability data and metadata, allowing products to be tracked over time.

              This dataset includes all relevant discrete features and characteristics, either for the product itself or for each operation that compose its manufacturing process.

              e.g. cycle time, weight, length, torque, angle
              e.g. Quality Assurance, laboratory measurements

              Analytical results from the Platform’s Predictive Engine are associated with the corresponding TrackID, allowing the System to identify abnormal products and flag them as Outliers – as well as find others with similar patterns.

              PROCESS SAMPLING

              Logs automation alarms and continuously (~10 ms latency) tracks dozens of continuous process features to create a dataset that describes the physical process’ behavior over time.

              e.g. temperature, pressure, voltage, pH, rpm, kg/h.

              Data can be queried via the Historian with detailed charts and interactive graphs for data visualization. Statistical information regarding process curves (e.g. mean, sigma, min/max) are also available in both formats.

              Events (e.g. setpoint adjustments, operator input, automation alarms) are also timestamped and persisted in the dataset, so that their effects on productive outcome and process behavior can be assessed.

              PERFORMANCE TRACKING

              Continuously computes operational and financial KPI scores for the manufacturing process.

              • TEEP, OEE
              • Availability, MTBF, MTTR
              • Quality, Gross, Net, NC/rejected
              • Performance, Starved/Blocked
              • Rework, Yield (total & direct)
              • Financial KPIs (e.g. cost, cost per unit)
              • Customized KPIs (MCOTS)

              These scores can be used to benchmark assets across the supply chain.
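For reference, the standard OEE and TEEP formulas behind these KPI scores can be sketched as follows (the factor values are illustrative, and this is the textbook formulation, not necessarily the Platform's exact computation):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of its three factors."""
    return availability * performance * quality

def teep(oee_score: float, utilization: float) -> float:
    """TEEP extends OEE by the fraction of calendar time actually scheduled."""
    return oee_score * utilization

# Illustrative factor values for one shift.
score = oee(availability=0.90, performance=0.95, quality=0.98)
total = teep(score, utilization=0.70)
```

Because TEEP multiplies OEE by utilization, it is always at or below OEE, exposing losses hidden in unscheduled calendar time.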

              PRODUCTION TRACKING

              Manufacturing Execution System (MES) module to manage production as well as input data and events.

              • Production order sequencing
              • TrackID, Part Number, SKU
              • Recipes, BOM, laboratory measurements
              • Operator/Work shift tracking
              • Maintenance interaction
              • Quality inspection results

              Can be configured to autonomously consume data from third-party applications via the Platform’s API (e.g. SCADA, MES, LIMS, ERP, CRM).

              ARTIFICIAL INTELLIGENCE 

              “Industrial analytics can be applied to machine-streaming data received from disparate sources to detect, abstract, filter and aggregate event-patterns, and then to correlate and model them to detect event relationships, such as causality, membership, and timing characteristics.

              Identifying meaningful events and inferring patterns can suggest large and more complex correlations so that proper responses can be made to these events.

              Industrial analytics can also be used to discover and communicate meaningful patterns in data and to predict outcomes.”

              Industrial Internet Consortium

              Clusterization

              The I2M Platform uses its proprietary clusterization algorithms (a version of DBSCAN that has been optimized for datasets with a large number of features) to autonomously group unlabeled datasets of similar properties.

              This analysis can be done along any number of dimensions (production features e.g. temperature, pressure, pH, voltage, position, time) to provide a comprehensive view of the manufacturing process.

              Parameters for clusterization (i.e. Eps, MinPoints) are adjusted during the Data Science phase until clusters reach appropriate granularity. The platform characterizes them with statistical information for each feature (e.g. min, max, mean, sigma), as well as how each feature affected clustering results (e.g. EpsOff, Out, Delta).

                There are several different algorithms for clusterizing datasets, and appropriateness is a product of both the datasets’ characteristics (e.g. shape, number of dimensions) as well as the system’s constraints (e.g. bandwidth, latency).

                For its Smart Manufacturing module, the I2M Platform employs Density-Based Spatial Clustering of Applications with Noise (DBSCAN), a clusterization method that is significantly more resource-effective than its counterparts, as shown in the figure.
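A minimal, unoptimized DBSCAN sketch illustrates the Eps/MinPoints mechanics described above (the Platform's proprietary variant is optimized for high-dimensional data; this toy version is for illustration only):

```python
from math import dist

def dbscan(points, eps, min_points):
    """Minimal DBSCAN: density-reachable points share a cluster id,
    sparse points are labeled -1 (noise)."""
    labels = {}       # point index -> cluster id (or -1 for noise)
    cluster = -1
    for i in range(len(points)):
        if i in labels:
            continue
        neighbors = [j for j in range(len(points))
                     if dist(points[i], points[j]) <= eps]
        if len(neighbors) < min_points:
            labels[i] = -1    # provisional noise
            continue
        cluster += 1          # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in neighbors if j != i]
        while queue:
            j = queue.pop()
            if labels.get(j) == -1:
                labels[j] = cluster   # noise absorbed as border point
            if j in labels:
                continue
            labels[j] = cluster
            j_neighbors = [k for k in range(len(points))
                           if dist(points[j], points[k]) <= eps]
            if len(j_neighbors) >= min_points:   # j is core: keep expanding
                queue.extend(k for k in j_neighbors if k not in labels)
    return labels

# Two dense groups plus one isolated point (illustrative 2-D data).
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (9.0, 0.0)]
labels = dbscan(pts, eps=0.5, min_points=3)
```

Eps bounds the neighborhood radius and MinPoints the density needed to seed a cluster, which is exactly the pair of parameters tuned during the Data Science phase.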

                  Classification

                  Once training is done, the I2M Platform starts using Deep Learning, a subset of Machine Learning that relies on algorithms that emulate the human brain: Artificial Neural Networks (ANN). This approach is only made possible by the enormous amount of data accumulated by the platform, as well as the powerful computational capabilities of the I2M Cloud.

                  In this phase, rather than performing batch analytics to clusterize datasets, the platform uses the results from training to classify processes in real time, using Fuzzy Logic to attribute scores and determine what cluster (or combination of clusters) best represents each fraction of the process.
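As an illustration of fuzzy scoring, the sketch below attributes normalized inverse-distance scores to each cluster centroid; the centroids and sample are invented, and the Platform's actual Fuzzy Logic rules are not public:

```python
def fuzzy_scores(sample, centroids):
    """Attribute a normalized score to each cluster via inverse distance;
    scores sum to 1, so they read as degrees of membership."""
    inv = {}
    for name, centroid in centroids.items():
        d = sum((a - b) ** 2 for a, b in zip(sample, centroid)) ** 0.5
        inv[name] = 1.0 / (d + 1e-9)  # guard against zero distance
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}

# Invented cluster centroids over two features (temperature, pressure).
centroids = {"stable": (215.0, 40.0), "unstable": (230.0, 55.0)}
scores = fuzzy_scores((216.0, 41.0), centroids)
best = max(scores, key=scores.get)
```

Because each sample gets a graded score against every cluster rather than a hard assignment, a process fraction can be reported as, say, mostly "stable" with a minor "unstable" component.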

                    Data Science

                    Unsupervised Learning classifies processes by their physical properties in order to group similar processes and distinguish them from those with different profiles. These clusters can then be correlated with production results either manually or automatically (e.g. quality inspection results linked to production traceability via TrackID, timestamp etc), classifying each one of them as desirable/undesirable.

                    This leads to the Supervised Machine Learning phase of the Data Science process, when the platform is taught which of the clusters are desirable and which are not, as well as the thresholds for these distinctions. This allows the platform’s AI to automatically enforce production policies that steer processes towards desired profiles, and away from undesired ones.

                    For the information drawn from Machine Learning to provide valuable insights, the ML process itself has to adhere to the manufacturing context, so that models aptly represent the real-world conditions they describe.

                    DataBot and the client thus participate in a collaborative Data Science process to adjust the parameters (e.g. Eps in Clustering, as an approximation of acceptable error/deviation for each process feature) and the expected results of algorithms in order to ensure their applicability.

                    This approach creates Expert Algorithms, modeling the reasoning of experts in that given process, empowering them with automated conclusions based on a volume of data that would be impractical for them to analyze. It improves labor efficiency at all technical levels by empowering the workforce with intelligent insights rather than raw data.

                    Artificial Intelligence

                    Since the Edge Server maintains a real-time representation of all I2M Smart Assets (Digital Twin) and is able to communicate seamlessly with them (M2M), actions and changes of states in the Digital Twins can be instantly translated into equivalent operations in the physical asset.

                    As a result, all the computational tools available for the Digital Twin can be applied in the physical process in real time (~100 ms latency). This creates a layer of intelligence corresponding to the Control functional domain (IIC-IIRA) in Industrial Control Systems (ICS), on Level 1 (ISA-95.03) of the manufacturing process.

                    It deploys real-time (10~100 ms), AI-powered, latency-aware, closed-loop feedback controls that read data from sensors, apply rules and logic, and exercise control over the physical systems through actuators.

                    • Seamless M2M communication and coordination between I2M Smart Assets (interactions between digital twins are carried out by their physical twins);
                    • Continuous optimization of the process by pursuing operational conditions that have been linked to improved results (e.g. quality of goods, throughput);
                    • Continuous avoidance of operational conditions that have been linked to undesirable results (e.g. scrap/rework, increased asset deterioration);
                    • Deploy state-of-the-art IT algorithms directly on the shopfloor (e.g. Artificial Intelligence);
                    • Remote control of production to authorized personnel via the I2M App;
                    • Autonomous communication and coordination between I2M Smart Assets.


                      Once training is done, the Platform’s Predictive Engine begins real-time Classification of the productive process based on the Classes established in the Clusterization stage.

                      PRODUCT CLASS

                      Clusterization and Classification of discrete product features (e.g. cycle time, weight, length, torque, angle) for specific time windows or TrackIDs, as well as the operations that compose their productive process.

                      The Clusterization engine can uncover patterns such as how production mix affects average cycle times, so that production planning can be optimized.

                      On the shopfloor layer, the Classification engine can detect disturbances and abnormalities as Outliers (e.g. a disruption causing the takt time of a given operation to fluctuate abnormally).

                      PROCESS CLASS

                      Clusterization and Classification of continuous process features (e.g. temperature, pH, voltage, rpm, kg/h) for specific TrackIDs, operations, time frames or samples (~10 ms).

                      The Clusterization engine uncovers patterns and constraints across these continuous dimensions. Managers can, for instance, craft data-driven Standardized Work Instructions and Recipes with setpoints that take these relationships into account.

                      On the shopfloor layer, the Classification engine can detect undesirable productive patterns in real time, reacting autonomously (e.g. flag them for Quality Inspection).

                      BEHAVIOR CLASS

                      Clusterization and Classification of Feature Variation (Mean/Sigma, an overall index of curve instability) for each continuous process feature.

                      In the Clusterization engine this can be correlated to previous classes to uncover patterns in how setpoints and operational conditions affect process stability – and how it in turn affects productive performance.

                      The Classification engine uses these results to detect abnormal and undesirable behavior patterns in real time, alerting stakeholders if it surpasses pre-established thresholds and ranges.
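The instability index can be sketched as the coefficient of variation (sigma over mean) of a feature's samples; note this is an assumed, common formulation, not necessarily the Platform's exact definition:

```python
from statistics import mean, pstdev

def instability_index(samples):
    """Sigma/mean of a feature's samples: higher values indicate a less
    stable curve. Assumed formulation (coefficient of variation)."""
    m = mean(samples)
    return pstdev(samples) / m if m else float("inf")

# Illustrative temperature curves: one steady, one erratic.
steady = [215.0, 215.2, 214.9, 215.1]
erratic = [215.0, 230.0, 198.0, 224.0]
```

Normalizing sigma by the mean makes the index comparable across features with very different scales (e.g. temperature in degrees versus throughput in kg/h).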

                      PERFORMANCE CLASS

                      Clusterization and Classification of KPI scores (e.g. OEE, Net/Gross, MTBF, Starved/Blocked) for specific processes (e.g. a given TrackID’s production) or time frames (e.g. per shift).

                      The Clusterization engine provides an overall index of productive performance that can be correlated to previous classes to uncover patterns in how different process behavior profiles impact operational and financial outcome.

                      The Classification engine can detect fluctuations in performance in real time, alerting stakeholders if it declines or is insufficient to meet production goals.


                      CYBER SECURITY

                       

                      The Platform’s cyber security framework is compliant with the Commercial National Security Algorithm Suite (CNSA Suite): cryptographic algorithms specified by the National Institute of Standards and Technology (NIST) and used by NSA’s Information Assurance Directorate (IAD) in solutions approved for protecting National Security Systems (NSS).

They include cryptographic algorithms for encryption, key exchange, digital signature, and hashing. In 2015, the NSA announced preliminary plans for transitioning to quantum-resistant algorithms; until that transition, the Suite specifies the following:

  • Advanced Encryption Standard (AES): symmetric block cipher used for information protection; use 256-bit keys to protect up to TOP SECRET.
  • Elliptic Curve Diffie-Hellman (ECDH) Key Exchange: asymmetric algorithm used for key establishment; use Curve P-384 to protect up to TOP SECRET.
  • Secure Hash Algorithm (SHA): used for computing a condensed representation of information; use SHA-384 to protect up to TOP SECRET.
  • RSA: asymmetric algorithm used for key establishment and for digital signatures; use a minimum 3072-bit modulus to protect up to TOP SECRET.
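The CNSA key and digest sizes can be exercised directly from Python's standard library. AES-256 encryption itself would require a third-party package such as `cryptography` (an assumption for illustration, not something the Platform prescribes), but the parameter choices are easy to check:

```python
import hashlib
import secrets

# CNSA Suite parameters: 256-bit AES keys and SHA-384 hashes.
AES_KEY_BITS = 256
key = secrets.token_bytes(AES_KEY_BITS // 8)   # 32-byte key for AES-256
digest = hashlib.sha384(b"process telemetry frame").hexdigest()

assert len(key) == 32     # 256-bit symmetric key
assert len(digest) == 96  # SHA-384 yields 384 bits = 96 hex characters
print(f"key bytes: {len(key)}, SHA-384 hex length: {len(digest)}")
```

`secrets.token_bytes` is the standard-library source of cryptographically strong randomness, which is what key generation requires; `random` would not be acceptable here.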

                        SOFTWARE AS A SERVICE (SaaS)


The I2M Platform is offered under the Software as a Service (SaaS) business model, with a monthly subscription rather than large upfront capital investments. This allows clients to reach breakeven faster and improve Net Present Value, avoiding the high Total Cost of Ownership of traditional licensing models (e.g. software depreciation, hardware, network, database and human capital costs).

                        Additionally, this business model provides a series of technical advantages for clients:

• Platform upgrades: new AI and Machine Learning algorithms, new features and OS compatibility;
• Data backup and automatic server monitoring, with mobile push notifications and e-mail alerts;
• High degree of cyber security, with edge computing (hybrid cloud architecture) and enterprise-grade security frameworks;
• Technical support during business hours (standard) or 24×7 (contractual addendum).

                         The Platform can also be commercialized as MCOTS (Modified Commercial Off-The-Shelf), with customized dashboards, custom industrial IoT interfaces and specialized algorithms (subject to technical analysis and contractual addendum).


DataBot Software Intelligence S/A
São José dos Campos Technological Park
500 Avenida Doutor Altino Bondensan
12247-016, São José dos Campos (SP) – Brazil
+55 (12) 3945-1385 / +55 (12) 3945-1391

DataBot Software Intelligence S/A – European Office
Tauentzienstraße 16
10789, Berlin – Germany
+49 (030) 7889-1931


                        The Platform uses Edge Computing (hybrid cloud architecture) to distribute analytics along the available computational resources, from shopfloor to enterprise cloud control (ISA-95 levels 1 through 4).

It deploys a series of latency-aware, closed-loop feedback control systems at various layers of the manufacturing process. At each layer, the Platform is designed to:

                        • Capture/generate, structure, communicate and persist all relevant data to describe the manufacturing process within that time frame.
                        • Deploy the Supervised and Unsupervised Machine Learning algorithms that fit within that time frame, autonomously applying analytical results.
                        • Autonomously apply analytical results to enforce pre-configured corporate and operational policy, either by alerting stakeholders (e.g. push notifications, automation alarms) or interfering on the physical process (e.g. setpoint adjustments, emergency stoppages).
                        • Provide data-driven actionable insights for decision support within that time frame, allowing stakeholders to optimize processes at all levels.
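The latency-tier routing implied above can be sketched as follows. The shopfloor and cloud budgets follow the latencies quoted in the text; the intermediate "edge" tier, its 1~10 s budget, and the "slowest tier that meets the deadline" policy are illustrative assumptions, not the Platform's actual scheduler:

```python
# Latency budgets per tier (seconds), fastest first.
TIERS = [
    ("shopfloor", 0.1),    # 10~100 ms control loop
    ("edge",      10.0),   # 1~10 s control loop (assumed)
    ("cloud",     100.0),  # 10~100 s control loop
]

def dispatch(deadline_s: float) -> str:
    """Pick the slowest tier whose worst-case loop latency still meets
    the task's deadline, keeping scarce shopfloor compute free for
    hard-real-time tasks."""
    for name, budget in reversed(TIERS):
        if budget <= deadline_s:
            return name
    raise ValueError("deadline tighter than the fastest control loop")

print(dispatch(0.5))   # a 500 ms deadline must run on the shopfloor
print(dispatch(30.0))  # a 30 s deadline fits the assumed edge tier
```

Routing each task to the cheapest tier that can still close its loop in time is one common way to realize the ISA-95 level 1-4 distribution described above.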

                        Cloud (IT)

                        (10~100 s latency)

Persists and structures operational shopfloor data across the value chain, consolidating it within the corporate context through the Platform’s API (e.g. ERP, CRM, BI). […]

                        Use Advanced Analytics to uncover patterns, understanding the operational conditions that generate desirable process behavior and production outcomes. […]

Design operational and corporate policies and procedures that incentivize operational conditions linked to desirable production outcomes. […]


                        Provides data-driven actionable insights for corporate-wide decision support (e.g. standardized work orders, assertive CAPEX investments, supply chain transparency). […]

                        INDUSTRIAL
                        INTERNET

                        MACHINE
                        LEARNING

                        ACTIONABLE
                        INSIGHTS

                        Digitizes manufacturing within a structured, exportable dataset that tracks operations and describes the physical process over time, as well as its operational context. […]

                        Continuously monitor production with a suite of Unsupervised Machine Learning algorithms to detect abnormalities, autonomously applying analytical results. […]

                        Provides data-driven analytical results for decision support on the operational level (e.g. quality inspections, setpoint adjustments, maintenance planning). […] 

                        Shopfloor (IT/OT)

                        (10~100 ms latency)
