Fast Healthcare Interoperability Resources (FHIR, pronounced “fire”) is a standard describing data formats and elements (known as “resources”) and an application programming interface (API) for exchanging electronic health records (EHR). The standard was created by the Health Level Seven International (HL7) health-care standards organization.
FHIR is organized by resources (e.g., Patient, Observation). Such resources can be specified further by defining FHIR profiles (for example, binding to a specific terminology). A collection of profiles can be published as an implementation guide (IG), such as the U.S. Core Data for Interoperability.
Because FHIR is implemented on top of the HTTPS (HTTP Secure) protocol, FHIR resources can be retrieved and parsed by analytics platforms for real-time data gathering. In this model, healthcare organizations can gather real-time data from specified resources; FHIR resources can also be streamed to a data store where they are correlated with other informatics data. Potential use cases include epidemic tracking, prescription drug fraud detection, adverse drug interaction warnings, and the reduction of emergency room wait times.
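Since FHIR is just a RESTful API over HTTPS, pulling a resource is an ordinary HTTP request. Here is a minimal sketch in Python, assuming a hypothetical FHIR server base URL and patient id:

import requests

FHIR_BASE = "https://example.org/fhir"  # hypothetical FHIR server

# Retrieve a single Patient resource by its logical id
patient = requests.get(
    f"{FHIR_BASE}/Patient/12345",
    headers={"Accept": "application/fhir+json"},
).json()
print(patient["resourceType"], patient.get("birthDate"))

# Search Observations for that patient; the result comes back as a Bundle resource
bundle = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": "12345", "_count": 50},
    headers={"Accept": "application/fhir+json"},
).json()
for entry in bundle.get("entry", []):
    print(entry["resource"]["resourceType"])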
Here we will try to use GitHub Actions to automate the ingestion from Synthea, using Azure DevOps.
Part 2: in a future blog post, we will look more closely into the data and apply some ML to it.
Bulk FHIR in ndjson format (set exporter.fhir.bulk_data = true to activate)
C-CDA (set exporter.ccda.export = true to activate)
CSV (set exporter.csv.export = true to activate)
CPCDS (set exporter.cpcds.export = true to activate)
Rendering Rules and Disease Modules with Graphviz
Azure FHIR API
Azure Healthcare APIs provide pipelines that help you manage protected health information (PHI) at scale. Rapidly exchange data and run new applications with APIs for health data standards, including Fast Healthcare Interoperability Resources (FHIR) and Digital Imaging and Communications in Medicine (DICOM). Ingest, standardize, and transform data with easy-to-deploy tools and connectors for device and unstructured data. Expand the discovery of insights by connecting to tools for visualization, machine learning (ML), and AI.
ML model deployment in production is still an area that lacks conformity, nomenclature, and patterns. Aside from a few technology companies that started the journey early on, for late adopters of ML practices it's pretty much the Wild West when it comes to standards of model deployment.
Other challenges include the culture of data science teams inside the organization and productionizing the process of model release. Data scientists, who by nature come from an academic and research background, tend to focus on perfecting the quality of predictions and classifications by running different experiments, tracking them, lowering cost functions, and so on, while data engineering tends to focus on streamlining the delivery of models to production. Here is a great article by Assaf Pinhasi about the cultural gap in data science teams.
The most common practice I have seen in different projects and organizations tends to be the following:
Data scientists use tools or platforms like Jupyter Notebook, Databricks notebooks, and others to run all the experiments needed for data exploration and model tuning. Once the model is trained, tuned with the right parameters, and saved as an artifact (like a pickle file), the code gets committed to a git repository, and the work of data engineering begins.
From that point on, a data engineer needs to:
Build a data pipeline by creating the training and automation scripts (train.py and predict.py)
Design a deployment strategy, such as a microservices architecture with different services: inference, data preparation, etc.
Build a CI/CD pipeline with the right ML-driven automation tests
Design model monitoring to capture concept drift
Size the right hardware needed to run the inference and the training
The boundaries of this collaboration between data science and data engineering often feel blurry, leaving plenty of room for "who is supposed to do what," and in most cases requiring data engineers to spend time understanding the steps that were followed.
The new approach
The idea behind this post is to showcase an example of streamlined ML deployment and training using a combination of two ML frameworks, Kedro and cortex.dev, with a minimum amount of code.
What is Kedro? Kedro is an open-source Python framework for creating reproducible, maintainable, and modular data science code. It borrows concepts from software engineering and applies them to machine-learning code; applied concepts include modularity, separation of concerns, and versioning.
What is Cortex? Cortex is an open-source platform for large-scale inference workloads. It has the following capabilities: it supports deploying TensorFlow, PyTorch, and other models as realtime or batch APIs; ensures high availability with availability zones and automated instance restarts; runs inference on on-demand or spot instances with on-demand backups; and autoscales to handle production workloads with support for overprovisioning.
The combination of both tools allows us to implement a new approach that will:
Shift the data pipeline build to the data science side (Kedro)
Parametrize model training by externalizing inputs such as the train/test split, the algorithms used, learning rates, and epochs (Kedro)
Introduce the notion of nodes, pipelines, and data catalogs (Kedro)
Standardize the inputs/outputs of nodes and persist results (Kedro)
Guarantee scalability and repeatability: it's easy to reuse nodes and pipelines on new data sources to create models specific to a similar business unit (Kedro)
Design and build the inference infrastructure (Cortex)
Provide the flexibility to create batch or realtime API endpoints (Cortex)
Ease the management of dependencies (Cortex & Kedro)
The new architecture would look something like this:
One of Kedro's features is breaking the ML steps into nodes and pipelines; generally there is a
Data Engineering pipeline: data processing, feature extraction, normalization, encoding, etc.
Data Science pipeline: splitting the data into training and test sets, designing the model(s), and evaluation
Cortex, on the other hand, takes care of creating and deploying the model generated by the Kedro pipeline using the Cortex operator; it creates a Kubernetes cluster in either AWS or GCP using a simple infrastructure description file:
It also creates a load balancer to distribute inference across all the cluster nodes, and an API gateway to serve the API responses.
Example : New York Taxi Trip Duration
I will be using a Kaggle dataset that has the following data structure:
We aim to create a model that will predict the trip duration based on the other input features, like pickup date, pickup location (longitude/latitude), drop-off location, etc.
Considering that we have a sizable amount of training data (1,458,645 rows), that there is no sparsity in the data, and that no dimensionality reduction is needed, I will use LightGBM for this proof of concept.
Creating Nodes and Pipelines
Nodes are the building blocks of pipelines and represent tasks. Pipelines are used to combine nodes to build workflows, which range from simple machine learning workflows to end-to-end production workflows.
In our case a node will represent tasks like:
Feature extraction: hour of the day, day of the month, month of the year
Split data: train and test sets
Train model: training using LightGBM
Evaluate model: calculating metrics
Pipeline organises the dependencies and execution order of your collection of nodes, and connects inputs and outputs while keeping your code modular. The pipeline determines the node execution order by resolving dependencies and does not necessarily run the nodes in the order in which they are passed in.
To benefit from Kedro’s automatic dependency resolution, you can chain your nodes into a pipeline, which is a list of nodes that use a shared set of variables.
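To make the ideas above concrete, here is a minimal sketch of how the four nodes could be wired into the two Kedro pipelines. The function bodies, parameter entries (params:test_size, params:lgbm), and catalog names are illustrative rather than the exact project code.

from kedro.pipeline import Pipeline, node
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def extract_features(trips: pd.DataFrame) -> pd.DataFrame:
    # Derive hour of the day, day of the month, and month of the year
    trips = trips.copy()
    pickup = pd.to_datetime(trips["pickup_datetime"])
    trips["hour"], trips["day"], trips["month"] = pickup.dt.hour, pickup.dt.day, pickup.dt.month
    return trips

def split_data(features: pd.DataFrame, test_size: float):
    X = features.select_dtypes("number").drop(columns=["trip_duration"])
    y = features["trip_duration"]
    return train_test_split(X, y, test_size=test_size, random_state=42)

def train_model(X_train, y_train, params: dict):
    return lgb.LGBMRegressor(**params).fit(X_train, y_train)

def evaluate_model(model, X_test, y_test) -> float:
    return mean_squared_error(y_test, model.predict(X_test)) ** 0.5

data_engineering = Pipeline([
    node(extract_features, inputs="trips_train", outputs="trip_features"),
])

data_science = Pipeline([
    node(split_data, ["trip_features", "params:test_size"], ["X_train", "X_test", "y_train", "y_test"]),
    node(train_model, ["X_train", "y_train", "params:lgbm"], "regressor"),
    node(evaluate_model, ["regressor", "X_test", "y_test"], "rmse"),
])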
Here is the code for both pipelines. To run the project, change directory to ny_cab_trip_duration_kedro_training and run the command:
ny_cab_trip_duration_kedro_training$ kedro run
2021-01-19 19:35:43,309 - kedro.io.data_catalog - INFO - Loading data from `trips_train` (CSVDataSet)...
2021-01-19 19:35:46,316 - kedro.pipeline.node - INFO - Running node: extract_features: extract_features([trips_train]) -> [extract_features]
2021-01-19 19:36:24,834 - numexpr.utils - INFO - Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
2021-01-19 19:36:24,834 - numexpr.utils - INFO - NumExpr defaulting to 8 threads.
2021-01-19 19:39:27,635 - kedro.io.data_catalog - INFO - Saving data
This will run all the nodes described above and generate the model file:
Model Deployment With Cortex
The amount of code needed for deployment is minimal with Cortex, which makes automating and streamlining deployment extremely easy. In a few steps we can have APIs deployed with the latest version of the model. Here are the steps:
Build a cloud deployment cluster
Deploy the model
Build cloud kubernetes cluster:
$cortex cluster up -c basic-cluster.yaml --aws-key AKIA3KH6IPR6WOSYNHVU --aws-secret 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
￮ creating a new s3 bucket: cortex-6a2d11117c ✓
￮ creating a new cloudwatch log group: cortex ✓
￮ creating cloudwatch dashboard: cortex ✓
￮ creating api gateway: cortex ✓
￮ spinning up the cluster (this will take about 15 minutes) ...
At the end of the execution :
[✔] EKS cluster "cortex" in "us-east-1" region is ready
￮ updating cluster configuration ✓
￮ configuring networking (this might take a few minutes) ✓
￮ configuring autoscaling ✓
￮ configuring logging ✓
￮ configuring metrics ✓
￮ starting operator ✓
￮ waiting for load balancers ............................................................................ ✓
￮ downloading docker images ✓
cortex is ready!
api load balancer: a8e87f75709de4e96bbc3871b8ef9ceb-a6ec41dfe22e0c12.elb.us-east-1.amazonaws.com
api gateway: https://g06o0hssmj.execute-api.us-east-1.amazonaws.com
Note: I have copied the model created by the Kedro pipeline to S3 under s3://cortex-6a2d11117c/tmp/
cortex-trip-estimator$ cortex deploy trip_estimator.yaml
using aws environment
updating trip-estimator (RealtimeAPI)
cortex get (show api statuses)
cortex get trip-estimator (show api info)
cortex logs trip-estimator (stream api logs)
To make sure the API is deployed :
env   realtime api     status   up-to-date   requested   last update   avg request   2XX
aws   trip-estimator   live     1            1            12m16s       -             -
cortex-trip-estimator$ python consume.py
Trip duration is : 5.252568735167378
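A client like consume.py can be as simple as an HTTP POST against the deployed realtime API. Here is a hedged sketch, assuming the API accepts a JSON payload of trip features; the endpoint path and feature names are illustrative (the API gateway URL comes from the cluster output above):

import requests

ENDPOINT = "https://g06o0hssmj.execute-api.us-east-1.amazonaws.com/trip-estimator"  # hypothetical path

sample_trip = {
    "passenger_count": 1,
    "pickup_longitude": -73.982, "pickup_latitude": 40.767,
    "dropoff_longitude": -73.964, "dropoff_latitude": 40.765,
    "pickup_datetime": "2016-03-14 17:24:55",
}

response = requests.post(ENDPOINT, json=sample_trip)
response.raise_for_status()
print("Trip duration is :", response.json())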
With the growing number of platforms, tools, and frameworks that facilitate the deployment of machine learning, we will eventually get to a point where we have defined patterns and standards.
The foundation of successful ML projects is having data scientists and data engineers speak the same language by defining pipelines, tasks, inputs, and outputs; at that point it becomes easy to streamline and automate delivery.
Modern power plants generate an enormous amount of data from an ever-growing number of high-frequency sensors. Some applications of this data are protection against problems induced by combustion dynamics, improving the plant heat rate, and optimizing power generation.
If we take a sensor that monitors the combustion process at a frequency of 25 kHz, it could generate up to 10 gigabytes of data a day; scale that to hundreds (sometimes thousands) of sensors and you end up with something on the order of terabytes a day.
In the energy industry, independent power producers with big fleets of generation units are gathering massive quantities of data from their plants. Whether it's for reporting and dashboarding, making recommendations in real time to improve generation, or simply alerting about a potential issue, real-time processing of streaming data is, and should be, an integral part of this data pipeline.
Kappa vs Lambda Architecture :
Generally there are two approaches to realtime streaming: the Lambda and Kappa architectures.
Lambda Architecture :
Lambda architecture is a way of processing massive quantities of data (i.e. “Big Data”) that provides access to batch-processing and stream-processing methods with a hybrid approach. Lambda architecture is used to solve the problem of computing arbitrary functions. The lambda architecture itself is composed of 3 layers : Batch, Speed, and Serving.
The batch layer manages historical data and handles all the computation and transformations on the data, including machine learning inference.
The speed layer handles all low-latency, real-time queries, and the serving layer merges the outputs of both layers to respond to queries.
Kappa Architecture :
The Kappa Architecture is used for processing streaming data. The main premise behind the Kappa Architecture is that you can perform both real-time and batch processing, especially for analytics, with a single technology stack. It is based on a streaming architecture in which an incoming series of data is first stored in a messaging engine like Apache Kafka. From there, a stream processing engine will read the data and transform it into an analyzable format, and then store it into an analytics database for end users to query.
Both architectures are also useful for addressing “human fault tolerance,” in which problems with the processing code (either bugs or just known limitations) can be overcome by updating the code and running it again on the historical data. The main difference with the Kappa Architecture is that all data is treated as if it were a stream, so the stream processing engine acts as the sole data transformation engine.
If you want to take a look at the code first, here are the links to the notebook and the Kafka producer code.
After downloading the data, we select the feature columns, create a VectorAssembler to gather all the features into a single vector, split the data into train and test sets, and create a LinearRegression estimator using featuresCol and labelCol.
After the model is trained, we measure the model metrics (MAE, RMSE, R2).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("ccpp-regression").getOrCreate()

# Load the downloaded dataset (the CSV path here is illustrative)
df = spark.read.csv("data/power_plant.csv", header=True, inferSchema=True)

# Select the feature columns (every column except the label, PE)
feature_columns = df.columns[:-1]

# Create an assembler to gather all features into a single vector column
assembler = VectorAssembler(inputCols=feature_columns, outputCol="features")
df2 = assembler.transform(df)
df_train = df2.select("features", "PE")

# Split the data into train and test sets
train, test = df_train.randomSplit([0.7, 0.3])

# Create a LinearRegression estimator using featuresCol and labelCol
lr = LinearRegression(featuresCol="features", labelCol="PE")
model = lr.fit(train)

# Measure model metrics on the test set
evaluation_summary = model.evaluate(test)
Prediction on the test set:
The transform method creates an extra column called prediction.
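As an illustration (not the original notebook code), calling transform on the test set and inspecting the metrics computed above could look like this:

predictions = model.transform(test)
predictions.select("PE", "prediction").show(5)

# Metrics from the evaluation summary computed earlier
print("RMSE:", evaluation_summary.rootMeanSquaredError)
print("MAE :", evaluation_summary.meanAbsoluteError)
print("R2  :", evaluation_summary.r2)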
Predictions on streaming data
Power plants, whether combined cycle gas, coal, or nuclear, tend to generate an enormous amount of sensor data.
The Lambda architecture approach is to collect, process, and clean the data, then store it in a data lake waiting for the next batch to run inference on it; the second approach (Kappa architecture) is to run the model on the data as it streams. In this example we will use the following main frameworks to run realtime predictions:
Kafka: a data streaming framework allowing producers to write data upstream and consumers to read it downstream
Spark SQL: a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine.
Spark Structured Streaming: a scalable, fault-tolerant stream processing engine built on top of Spark SQL that makes it possible to run transformations and ML models on streaming data
Here is the code to initiate a read stream from a Kafka streaming cluster running on host plc-4nyp6.us-east-1.aws.confluent.cloud:9092.
The readStream method creates a streaming DataFrame that will hold the values captured from the Kafka stream over time.
This will read the stream of input values as a string:
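A minimal sketch of that read stream, assuming the Spark Kafka connector package is on the classpath; the topic name and security options are illustrative (Confluent Cloud normally also requires SASL credentials):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sensor-stream").getOrCreate()

dfs = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "plc-4nyp6.us-east-1.aws.confluent.cloud:9092")
    .option("subscribe", "sensor-readings")  # hypothetical topic name
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes; cast it to a plain string column
dfs2 = dfs.selectExpr("CAST(value AS STRING) AS value")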
At this point we need to split the string into an array of strings based on the comma delimiter and cast the list as Array<double>.
Since the model expects a vector of values to run the transformation (prediction), I created a UDF (User Defined Function) to convert the list of doubles into a vector using the Vectors.dense() method.
from pyspark.sql.functions import split, col, udf
from pyspark.ml.linalg import Vectors, VectorUDT

# Cast the comma-separated string input into an array of doubles
values = dfs2.select(split(col("value"), ",").cast("array<double>").alias("features_array"))

# Create a UDF to convert the features list into an MLlib dense vector
conv_vec = udf(lambda vs: Vectors.dense([float(i) for i in vs]), VectorUDT())
Now that we have the stream input data as a vector, we can run predictions:
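A sketch of that final step, reusing the batch-trained LinearRegression model on the streaming DataFrame and writing the predictions to the console (the column names follow the snippet above):

# Convert the array column into the "features" vector column the model expects
dfs3 = values.withColumn("features", conv_vec(col("features_array")))

predictions = model.transform(dfs3)

query = (
    predictions.select("prediction")
    .writeStream
    .outputMode("append")
    .format("console")
    .start()
)
query.awaitTermination()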
In the energy industry, forecasting the grid load is vital for various commercial optimizations around Day-Ahead and Real-Time trading, and it also helps Independent Power Producers (IPPs) allocate the right generation units.
We talked previously about energy markets: CAISO, PJM, ERCOT, and others. In this article the goal is not to discuss the accuracy of the model in predicting load, but to highlight the AWS SageMaker way of deploying ML models.
The Dataset :
You can use PJM's Data Miner tool to extract the load. Data Miner is PJM's enhanced data management tool, giving members and non-members easier, faster, and more reliable access to public data formerly posted on pjm.com.
XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable. It implements machine learning algorithms under the gradient boosting framework and provides parallel tree boosting (also known as GBDT or GBM) that solves many data science problems in a fast and accurate way.
XGBoost is a supervised machine learning algorithm with both a classifier and a regressor implementation.
In this instance we will use the XGBoost regressor for predictions:
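A sketch of how SageMaker's built-in XGBoost container could be used for this regression; the S3 bucket, hyperparameters, and instance types are illustrative, not the exact values from the notebook.

import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Built-in XGBoost container image for the current region
container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

xgb = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/pjm-load/output",  # hypothetical bucket
    sagemaker_session=session,
)
xgb.set_hyperparameters(objective="reg:squarederror", num_round=200, max_depth=6, eta=0.2)

# Training data exported from Data Miner and uploaded to S3 as CSV (label in the first column)
train_input = TrainingInput("s3://my-bucket/pjm-load/train.csv", content_type="text/csv")
xgb.fit({"train": train_input})

# Deploy a realtime endpoint for inference
predictor = xgb.deploy(initial_instance_count=1, instance_type="ml.m5.large")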
SageMaker allows you to validate your endpoint; here is a code snippet for that:
import boto3

runtime_client = boto3.client('sagemaker-runtime')

# Read one test observation (a CSV row without the label) to send to the endpoint
file_name = 'test_point.csv'
with open(file_name, 'r') as f:
    payload = f.read().strip()

response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
                                          ContentType='text/csv',
                                          Body=payload)
result = response['Body'].read().decode('ascii')
Notebook: you can find the notebook for this article on GitHub.
Amazon SageMaker has built-in algorithms for both supervised and unsupervised machine learning models. It provides a great platform for training and deploying machine learning models into a production environment on AWS. By combining this powerful platform with the serverless capabilities of Amazon Simple Storage Service (S3), Amazon API Gateway, and AWS Lambda, it's possible to transform an Amazon SageMaker endpoint into a web application that accepts new input data, potentially from a variety of sources, and presents the resulting inferences to an end user.
Most of the nation's wholesale electricity sales happen in competitive markets managed by Independent System Operators (ISOs), with over 200 million customers in these areas and over $120 billion in annual energy transactions taking place. Under the Federal Power Act, these markets are overseen by the Federal Energy Regulatory Commission (FERC), which ultimately determines the guidelines for how wholesale electricity is bought and sold in the marketplace. RTOs/ISOs create the market rules that determine whether and how energy resources can compete. Wholesale markets should allow all resources to compete on price and performance, as the Federal Power Act requires that the rates, terms, and conditions of service governing wholesale competitive markets be “just and reasonable”.
Predicting Energy Prices
In a competitive market, being able to predict energy prices does give IPPs a great advantage in formulating their bids for the day ahead market.
The DA is a purely financial market, which means even financial institutions that are not power producers can hedge and profit from buying/selling bulk energy in the DA market.
Energy Price Forecasting (EPF) has become very instrumental in the decision-making process for day-to-day energy bids, but also for creating a point of view (POV) for long-term investments.
There are multiple models and methods used in creating price forecasts for electricity:
Multi-agent model: builds price forecasts by matching demand to supply, simulating the operations of a heterogeneous system of generation units and companies
Fundamental model: focuses on simulating the physical and economic relationships influencing the trading of electricity, such as weather, fleet conditions, and load
Statistical model: uses mathematical regression models, basically a combination of previous day/month/year prices and other input variables like weather or load
Computational intelligence model: this class of models mainly uses deep neural networks or support vector machine methods to predict prices; these methods are good at capturing the non-linear aspects of a price curve, such as price spikes
Hybrid model: which is a combination of two or more of the above models.
We will use the computational intelligence model, based on Fritz Arnold's proposal, for this exploratory exercise.
In this paper, the author explores different approaches to creating an accurate prediction of wholesale energy prices. The one that is the subject of this article uses purely time series data from previous years to learn about long-term (one year) and short-term (one day) price patterns.
Historical hourly Day-Ahead market clearing prices for the German bidding zone.
We will use Amazon SageMaker as our data science and machine learning platform :
Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.
Traditional ML development is a complex, expensive, iterative process made even harder because there are no integrated tools for the entire machine learning workflow. You need to stitch together tools and workflows, which is time-consuming and error-prone. SageMaker solves this challenge by providing all of the components used for machine learning in a single toolset so models get to production faster with much less effort and at lower cost.
Platform Architecture :
SageMaker Studio unifies at last all the tools needed for ML development. Developers can write code, track experiments, visualize data, and perform debugging and monitoring all within a single, integrated visual interface, which significantly boosts developer productivity.
Loading the data: we read the input from a CSV file, which consists of historical prices.
Visualizing the history: 10 years of DA prices.
Model Architecture :
This model is a hybrid: it uses a combination of a convolutional neural network, with a kernel size and stride of 24 corresponding to the number of hours in a day, and a Long Short-Term Memory (LSTM) network, to help the model learn price patterns over long and short periods of time.
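A minimal Keras sketch of that architecture; the filter count, look-back window, and output horizon are assumptions for illustration, not the exact values from the paper.

from tensorflow.keras import layers, models

LOOK_BACK = 24 * 7 * 4  # hypothetical look-back window: four weeks of hourly prices

model = models.Sequential([
    # Conv1D with kernel and stride of 24 collapses each day of hourly prices into one feature vector
    layers.Conv1D(filters=32, kernel_size=24, strides=24, activation="relu", input_shape=(LOOK_BACK, 1)),
    # The LSTM layer learns longer-range (weekly/seasonal) patterns across days
    layers.LSTM(64),
    # Predict the next day's 24 hourly prices
    layers.Dense(24),
])

model.compile(optimizer="adam", loss="mse")
model.summary()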
For one year :
For 2 weeks :
Deploying the Model:
Once the model is trained and saved, we can use AWS endpoints to deploy it as an API.
API-backed deployment basically wraps the model in a web application to make it available for inference.
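A hedged sketch of such a deployment with the SageMaker Python SDK; the artifact path, framework version, and input window are assumptions that depend on how the training job saved the model.

import numpy as np
import sagemaker
from sagemaker.tensorflow import TensorFlowModel

role = sagemaker.get_execution_role()

model = TensorFlowModel(
    model_data="s3://my-bucket/epf-model/model.tar.gz",  # hypothetical artifact location
    role=role,
    framework_version="2.4",
)

# The endpoint wraps the model behind an HTTPS API
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")

# Send one input window of hourly prices and read back the predicted day
window = np.zeros((1, 24 * 7 * 4, 1)).tolist()  # placeholder input matching the model's expected shape
print(predictor.predict({"instances": window}))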
An EPF computational intelligence model based only on time series analysis provides good results when it comes to learning the general patterns of prices: seasonality of surges, quarterly and monthly patterns, and weekday vs. weekend differences. But it doesn't perform well on daily price surges, which are critical to energy traders; these impulses are not very frequent, but when they happen they offer a great revenue opportunity in the Real-Time market.
The rest of the project provides a solution to help mitigate this issue, which consists of using an MLP (multilayer perceptron) so the model can be fed different inputs like weather, load, etc., and predict prices. The accuracy of the model improves when it comes to predicting price spikes, but as stated above, computational intelligence models are not probabilistic in nature.
Graph databases are NoSQL databases that use the graph data model, made up of vertices (an entity such as a person, place, object, or relevant piece of data) and edges, which represent the relationships between two nodes.
Graph databases are particularly helpful because they highlight the links and relationships between relevant data similarly to how we do so ourselves.
This is an example of a graph:
In the graph data model above we can see the common entities: User, Movie, Genre
The common relationships are: Rates, Follows, Has.
Modeling Local Marginal Prices (LMPs) and Congestion Revenue Rights (CRR) :
What are LMPs?
LMPs represent the cost to buy and sell power at different locations within wholesale electricity markets, which are run by Independent System Operators (ISOs). Examples of ISOs include ERCOT, PJM, ISO-NE, MISO, CAISO, and NYISO. LMPs are made up of three components: energy price, congestion cost, and losses. Most ISOs have Day-Ahead and Real-Time LMPs. Day-ahead LMPs represent prices in day-ahead markets, which let market participants buy and sell wholesale electricity a day before the operating day to avoid volatility. Real-time LMPs represent prices in real-time markets, which let participants buy and sell power during the day of operation. As a simplified example, let's say you lived in a neighborhood and expected 100 MW of electricity demand at noon today. Yesterday you would have bought 100 MW of electricity on the day-ahead market to be delivered at noon today. However, when noon rolls around and demand is actually 105 MW, you would buy the additional 5 MW on the real-time market. Real-time market prices are generally more volatile than day-ahead market prices.
This is a simplified scenario where we have two generators and one consumer: the first generator can produce 10 MW at a price of $4 per MWh, the second can produce 20 MW at $2 per MWh, and the lines are able to transmit 10 MW to the consumer's premises.
No congestion exists, and Generator 2 has the lowest offer and can serve the entire load. Since the LMP is calculated as the cost of the next MW needed, and Generator 2 can supply that next MW at $2, the LMP is $2/MWh. Because there is no congestion on the network, the cheaper LMP of $2/MWh applies to all the nodes, and the 10 MW load costs $20.
In this case, the LMP will be $2/MWh and the payments/charges would look something like this :
As we can see, Generator 2 was able to provide all of the power needed, and the LMP is $2/MWh at all three buses (without accounting for line losses).
In the next scenario we will introduce the concept of congestion:
The power line from Generator 2 has a limit of 4 MW, meaning the maximum power Generator 2 can put on the grid is 4 MW; the remaining 6 MW that the consumer needs will be provided by Generator 1. This will influence the LMPs at all three buses.
Congestion exists and Generator 2 is not fully utilized; there is a constraint on Generator 2's line and it can only serve 4 MW. The lowest-priced generator can no longer serve the entire load; 6 MW must come from elsewhere, namely Generator 1.
The LMP at Generator 2 is still $2/MWh, so for the 4 MW it supplies, $8 will be paid to Generator 2. Generator 1 will supply 6 MW at $4/MWh, so it should receive a payment of $24. Because congestion exists between Generator 2 and the load, the two LMPs are different, and the more expensive generation cost of $4/MWh is used when charging the load for the 10 MW: $40 is owed for the 10 MW used over an hour.
In the following example I have tried to model both scenarios shown above.
There are 3 types of nodes:
Generator: a generation resource on the grid; it can be a power plant, or a wind or solar farm.
Consumer: a consuming entity (residential, commercial, etc.)
Connection: represents a connection between power lines on the grid
The relationships basically represent the capacity of the transmission lines, in this case 4 MW and 10 MW.
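As a sketch of how this graph could be created, assuming a Neo4j instance and its official Python driver (the labels, property names, and credentials are illustrative):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Two generators, one consumer, and a connection node, wired with
    # transmission lines carrying a capacity property (4 MW and 10 MW)
    session.run("""
        CREATE (g1:Generator {name: 'Generator1', offer_per_mwh: 4.0, capacity_mw: 10})
        CREATE (g2:Generator {name: 'Generator2', offer_per_mwh: 2.0, capacity_mw: 20})
        CREATE (c:Consumer   {name: 'Load', demand_mw: 10})
        CREATE (j:Connection {name: 'Bus'})
        CREATE (g1)-[:LINE {capacity_mw: 10}]->(j)
        CREATE (g2)-[:LINE {capacity_mw: 4}]->(j)
        CREATE (j)-[:LINE {capacity_mw: 10}]->(c)
    """)

driver.close()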
Electric energy consumption is essential for promoting economic development and raising the standard of living. In contrast to other energy sources, electric energy cannot be stored for large-scale consumption. From an economic viewpoint, the supply and demand of electric energy must be balanced at any given time. Therefore, a precise forecasting of electric energy consumption is very important for the economic operation of an electric power grid.
The ability to create a forecasting model for an individual consumer can help determine the overall load on the grid for a given time.
For this post, we will treat this as a time series prediction problem and use only one variable (univariate forecasting). Check back for a future post where we introduce multivariate forecasting.
The Dataset :
I used the Smart Meters Texas (SMT) portal to access my home electricity usage. SMT stores daily, monthly, and even 15-minute-interval energy data. The data is recorded by digital electric meters (commonly known as smart meters), and the portal provides secure access to it for customers and authorized market participants.
In addition to acting as an interface for access to smart meter data, SMT enables secure communications with the customer's in-home devices and provides a convenient and easy-to-use process whereby customers can voluntarily authorize market participants other than the customer's retail electric provider, or third parties, to access their energy information and in-home devices.
Keras: Keras is an open source neural network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible
TensorFlow : TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks
The Algorithm: LSTM
In our case, we will use a variant of the Recurrent Neural Network (RNN) called Long Short-Term Memory (LSTM). Why? Time series problems are a difficult type of predictive modeling; LSTMs are good at extracting patterns from input features spanning long periods of time, retaining them in memory, and using them to predict the next sequences. This is what my 3 months of usage (in kWh) looks like (by 15-minute interval):
For this example, I will only use a subset of the overall dataset: 3 days of electricity usage.
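A minimal sketch of the univariate LSTM setup; the CSV export path, look-back window, and layer sizes are illustrative, not the exact values used for the plots below.

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import layers, models

# Hypothetical export from the SMT portal: one kWh reading per 15-minute interval
usage = pd.read_csv("smt_usage.csv")["kwh"].values.astype("float32")

LOOK_BACK = 96  # one day of 15-minute intervals

scaler = MinMaxScaler()
scaled = scaler.fit_transform(usage.reshape(-1, 1))

# Turn the series into (samples, LOOK_BACK, 1) windows with next-step targets
X = np.array([scaled[i:i + LOOK_BACK] for i in range(len(scaled) - LOOK_BACK)])
y = scaled[LOOK_BACK:]

split = int(len(X) * 0.8)
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = models.Sequential([
    layers.LSTM(50, input_shape=(LOOK_BACK, 1)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_data=(X_test, y_test))

# Predictions come back in scaled units; invert the scaling before plotting
predictions = scaler.inverse_transform(model.predict(X_test))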
If the results were plotted, they would look something like the picture below:
Blue: actual usage
Orange: Test Set
If we zoom in on the prediction part:
This is good forecasting, with an RMSE of 0.19.
With the electricity market undergoing a revolution, load forecasts have gained much more significance, spreading across other business departments like energy trading and financial planning. Accurate load forecasts are the basis for the operations of most reliability organizations, like the Electric Reliability Council of Texas, more commonly known as ERCOT. Accurate load forecasting will become even more important with smart grids, which create the opportunity to proactively take action at the consumer, storage, and generation levels to avoid situations of energy scarcity and/or price surges.
With AI and machine learning creeping into every industry, the paradigm of IT as we know it is also changing. ML is a natural extension of the software revolution we have seen over the last decades, and knowing how to utilize ML in your industry will be a key element of success and growth in the coming years.
This transformation will need a new vision, as new jobs, new platforms and new ways of doing business will emerge from it. I believe at this point we are past the hype of AI and we are in the middle of a reality where machine learning and inference are helping thousands of businesses grow and prosper.
I have read several books on AI and ML, and the two that stand out are:
Human + Machine , reimagining work in the age of AI
Pragmatic AI : an introduction to Cloud-Based Machine learning.
Whether you are an engineer, a manager, an executive, or merely driven by curiosity about AI and ML, I recommend that you read these books to fully grasp their impact on many industries.
Human + Machine, reimagining work in the age of AI
Paul R. Daugherty and H. James Wilson did an amazing job of reimagining what work will look like in the age of AI. They introduced the notion of the Missing Middle: a realistic approach to this transformation that defines what machines can do, what humans can do, and where humans and machines have hybrid activities.
Humans can judge, lead, empathize and create, machines can iterate, predict and adapt.
AI can give humans superpowers, but humans need to train and sustain machines, and at times explain their decisions.
Paul and James talk about an entirely new set of jobs that will emerge from this alliance.
Pragmatic AI : an introduction to Cloud-Based Machine learning
If you are an engineer who likes to understand how training and inference work under the hood, this book is a great resource for you.
Pragmatic AI explains how you can utilize cloud resources in AWS, Azure, and GCP to train your models, optimize them, and deploy a production-scale, machine-learning-powered application.
The book also contains real applications and code samples to help you reproduce them on your own, and it covers the following topics:
AI and ML toolchains: from Python ecosystem tools like NumPy, Jupyter Notebooks, and others to the tools available on AWS, GCP, and Azure
DevOps practices to help you deliver and deploy
Creating practical AI applications from scratch
There are definitely a lot of publications concerning AI and ML, but the combination of the two books above covers both the organizational and structural challenges an organization will face when adopting AI and the technical background needed to work with it.
Often when training machine learning models, you find yourself creating different estimators and tuning this parameter or that to get the results you want; you may also find yourself wanting to save the results of those iterations to save time in the future.
That's what I'm trying to address in this post: having some sort of artifact repository for machine learning models that saves their parameters as metadata, using the following design:
1: user uploads artifacts using pre-signed s3 URLs
2 and 3: a putObject event triggers the Lambda function to make an API call to an EC2 instance running an HTTP server, which reads the estimator from S3 and gets its parameters
4: saving the parameters in DynamoDB
Uploading artifacts :
I use AWS S3 to store the assets, making use of the pre-signed URL feature, which lets you use temporary URLs to upload files to S3 and takes away the need to manage permissions.
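A small sketch of generating such a temporary upload URL with boto3; the bucket and key names are illustrative. The client can then upload the pickled model with a plain HTTP PUT, without AWS credentials of its own.

import boto3

s3 = boto3.client("s3")

upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "ml-artifact-repo", "Key": "models/my_model.pkl"},  # hypothetical names
    ExpiresIn=3600,  # the URL is valid for one hour
)
print(upload_url)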
To orchestrate all this, I like to use my favorite serverless framework.
These endpoints will allow you to update/create/delete an artifact, which in this case is a model.
For more reading about this, check out the README page of this serverless example.
Getting the parameters :
In this part, on the EC2 instance, we download the model and extract its parameters to store them in DynamoDB.
Initially I thought I could do all of this in Lambda, so I wouldn't have to create an EC2 instance just to read the parameters. Unfortunately, there are a couple of issues with that solution. One of them is the size of the dependencies: once you add the scikit-learn libraries, the Lambda zip reaches 60 MB, and even once uploaded there was an issue running scikit-learn as part of the Lambda. For this iteration I decided to use a t2.micro EC2 instance.
The EC2 instance runs a Python web server that receives requests with an asset_id, downloads the asset, gets its parameters, and stores them in DynamoDB.
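A sketch of that parameter-extraction step; the bucket, table, and key naming are assumptions, but the flow (download the pickled estimator, call scikit-learn's get_params(), write to DynamoDB) follows the description above.

import pickle
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("ml-model-parameters")  # hypothetical table name

def store_parameters(asset_id: str, bucket: str = "ml-artifact-repo"):
    # Download the artifact uploaded through the pre-signed URL
    obj = s3.get_object(Bucket=bucket, Key=f"models/{asset_id}.pkl")
    estimator = pickle.loads(obj["Body"].read())

    # get_params() returns every hyperparameter of a scikit-learn estimator
    params = {k: str(v) for k, v in estimator.get_params().items()}

    table.put_item(Item={"asset_id": asset_id, "parameters": params})
    return params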
ERCOT stands for the Electric Reliability Council of Texas; it manages the flow of electric power to more than 25 million Texas customers, representing about 90 percent of the state's electric load. As the independent system operator for the region, ERCOT schedules power on an electric grid that connects more than 46,500 miles of transmission lines and 600+ generation units. It also performs financial settlement for the competitive wholesale bulk-power market and administers retail switching for 7 million premises in competitive choice areas.
ERCOT also offers an online and public dataset giving market participants information on a variety of topics related to the market of electricity in the state of Texas, which makes it a good candidate for AWS products: Glue and Athena.
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run
Scraping Data :
The data on the ERCOT website is available as a collection of .zip files. I used a Python scraper from this GitHub repository to collect only the CSV files.
As an example, we will be collecting data about the total energy sold from this page
Using the previous tools, the command would look something like this:
The script will download the CSV files and store them in a data folder:
At this point, we transfer the data to S3 so it is ready for AWS Glue. An optimization of this process could be a scheduled Lambda function that continuously uploads new datasets, as sketched below.
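A small sketch of that upload step; the bucket name and prefix are illustrative, and the same few lines could run inside a scheduled Lambda function.

import glob
import os
import boto3

s3 = boto3.client("s3")

for path in glob.glob("data/*.csv"):
    key = "ercot/total-energy-sold/" + os.path.basename(path)
    s3.upload_file(path, "ercot-public-data", key)  # hypothetical bucket name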
Creating a Crawler
You can add a crawler in AWS Glue to traverse the datasets in S3 and create a table that can be queried.
At the end of its run, the crawler creates a table containing the records gathered from all the CSV files we downloaded from the ERCOT public dataset; in this instance the table is called damtotqtyengysoldnp.
And now you can query Ahead!
Using AWS Athena, you can run different queries on the table we generated previously; here are a few examples:
Total energy sold by settlement point :
Getting the hours of the day (11/12/2018) with the maximum energy sold:
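As an illustration of that second query, here is how it could be submitted programmatically with boto3; the table name comes from the crawler above, but the database, column names, and output location are assumptions about the CSV schema.

import boto3

athena = boto3.client("athena")

query = """
SELECT hourending, SUM(totalenergy) AS energy_sold
FROM damtotqtyengysoldnp
WHERE deliverydate = '11/12/2018'
GROUP BY hourending
ORDER BY energy_sold DESC
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "ercot"},  # hypothetical Glue database name
    ResultConfiguration={"OutputLocation": "s3://ercot-public-data/athena-results/"},
)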