Using Smart Meter Data for consumer electricity usage forecasting

Electric energy consumption is essential for promoting economic development and raising the standard of living. In contrast to other energy sources, electric energy cannot be stored for large-scale consumption, so from an economic viewpoint its supply and demand must be balanced at any given time. Precise forecasting of electric energy consumption is therefore very important for the economic operation of an electric power grid.

The ability to create a forecasting model for an individual consumer can help determine the overall load on the grid for a given time.

For this post, we will treat this as a univariate time series prediction problem. Check back for a follow-up post that introduces multivariate forecasting.

The Dataset :

I used the Smart Meter Texas (SMT) portal to access my home electricity usage. SMT stores energy data at daily, monthly, and even 15-minute intervals. The data is recorded by digital electric meters (commonly known as smart meters), and the portal provides secure access to it for customers and authorized market participants.

In addition to acting as an interface for access to smart meter data, SMT enables secure communications with the customer's in-home devices and provides a convenient, easy-to-use process through which customers can voluntarily authorize market participants other than their retail electric provider, or third parties, to access their energy information and in-home devices.

Tools :

Keras: Keras is an open-source neural network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible.

TensorFlow: TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks.

The Algorithm: LSTM

In our case, we will use a variant of the Recurrent Neural Network (RNN) called Long Short-Term Memory (LSTM). Why? Time series problems are a difficult type of predictive modeling, and LSTMs are good at extracting patterns from inputs that span long periods of time: they can retain that information in memory and use it to predict the next values in the sequence. This is what my three months of usage (in kWh, at 15-minute intervals) looks like:

[Figure: three months of electricity usage in kWh at 15-minute intervals]

For this example, I will only use a subset of the overall dataset (3 days of electricity usage):


DATE,USAGE_KWH
11/01/18 00:15,0.005
11/01/18 00:30,0.005
11/01/18 00:45,0.013
11/01/18 01:00,0.029
11/01/18 01:15,0.025
11/01/18 01:30,0.004
11/01/18 01:45,0.005
11/01/18 02:00,0.004
11/01/18 02:15,0.024

[Figure: plot of the 3-day usage subset]

Python :

Loading the dataset :


# load the dataset
from pandas import read_csv
dataframe = read_csv('AMS.csv', usecols=[1], engine='python')
dataset = dataframe.values
dataset = dataset.astype('float32')

Training Set and Test Set :


train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
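
The post uses trainX, trainY, testX, testY, and a look_back parameter without showing how they are built. Here is a minimal sketch of the usual framing step, assuming a look_back of 1 (each 15-minute value predicts the next one) and the [samples, time steps, features] shape expected by the LSTM layer below:

import numpy

# frame the series as [X = value at t, Y = value at t+1] pairs
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        dataX.append(dataset[i:(i + look_back), 0])
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# reshape inputs to [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1]))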

Creating the model :

from keras.models import Sequential
from keras.layers import Dense, Activation, LSTM

model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.add(Activation('tanh'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=50, batch_size=1)
Making predictions :

trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
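
The RMSE quoted at the end of the post can be computed like this (a small sketch, assuming trainY and testY come from the framing step above):

import math
from sklearn.metrics import mean_squared_error

trainScore = math.sqrt(mean_squared_error(trainY, trainPredict[:, 0]))
testScore = math.sqrt(mean_squared_error(testY, testPredict[:, 0]))
print('Train RMSE: %.2f' % trainScore)
print('Test RMSE: %.2f' % testScore)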

For plotting, we shift the training and test predictions:

# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
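
The chart itself is a couple of matplotlib calls; a minimal sketch (the post only shows the resulting picture):

import matplotlib.pyplot as plt

plt.plot(dataset)            # actual usage
plt.plot(trainPredictPlot)   # predictions over the training portion
plt.plot(testPredictPlot)    # predictions over the test portion
plt.show()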

If the results were plotted, they would look something like the picture below:

Blue: actual usage

Orange: Test Set

Green: Predictions

[Figure: full series with training and test predictions overlaid]

If we zoom in on the prediction part:

[Figure: zoom on the predicted portion of the series]

This is decent forecasting, with an RMSE of 0.19.

Summary :

With the electricity market undergoing a revolution, load forecasts have gained much more significance, spreading across other business functions like energy trading and financial planning. Accurate load forecasts are the basis for the operations of most reliability organizations, like the Electric Reliability Council of Texas, more commonly known as ERCOT. Accurate load forecasting will become even more important with Smart Grids, which create the opportunity to proactively take action at the consumer, storage, and generation levels to avoid energy scarcity and/or a price surge.

My book recommendations for AI and ML


With AI and machine learning creeping into every industry, the paradigm of IT as we know it is also changing. ML is a natural extension of the software revolution we have seen in the last decades, and knowing how to utilize ML in your industry will be a key element for success and growth in the coming years.

This transformation will need a new vision, as new jobs, new platforms and new ways of doing business will emerge from it. I believe at this point we are past the hype of AI and we are in the middle of a reality where machine learning and inference are helping thousands of businesses grow and prosper.

I have read several books on AI and ML, and the two that stand out are:

  • Human + Machine: Reimagining Work in the Age of AI
  • Pragmatic AI: An Introduction to Cloud-Based Machine Learning

Whether you are an engineer, a manager, an executive, or merely driven by curiosity about AI and ML, I recommend reading these books to fully grasp their impact on many industries.

Human + Machine: Reimagining Work in the Age of AI

Paul R. Daugherty and H. James Wilson did an amazing job of reimagining what work will look like in the age of AI. They introduced the notion of the Missing Middle: a realistic way of looking at this transformation by defining what machines can do, what humans can do, and where humans and machines share hybrid activities.

Humans can judge, lead, empathize, and create; machines can iterate, predict, and adapt.

AI can give humans superpowers, but humans need to train and sustain machines, and at times explain their decisions.

Paul and James talk about an entirely new set of jobs that will emerge from this alliance.

Pragmatic AI: An Introduction to Cloud-Based Machine Learning

If you are an engineer who likes to understand how training and inference work under the hood, this book is a great resource.

Pragmatic AI explains how you can utilize cloud resources on AWS, Azure, and GCP to train your models, optimize them, and deploy a production-scale, machine-learning-powered application.

The book also contains real applications and code samples to help you reproduce them on your own, and it covers the following topics:

  • The AI and ML toolchain: from Python ecosystem tools like NumPy, Jupyter Notebooks, and others to the tools available on AWS, GCP, and Azure
  • DevOps practices to help you deliver and deploy
  • Creating practical AI applications from scratch
  • Optimization

 

There are definitely a lot of publications about AI and ML, but the combination of the two books above covers both the organizational and structural challenges an organization will face when adopting AI, and the technical background needed to work with it.

Storing your ML models with their parameters

Often, when training machine learning models, you find yourself creating different estimators and tuning this parameter or that to get the results you want. You may also find yourself wanting to save the results of those iterations to save time in the future.

That's what I'm trying to address in this post: a simple artifact repository for machine learning models that also saves their parameters as metadata, using the following design:

[Architecture diagram: S3 upload via pre-signed URL, Lambda trigger, EC2 parameter extraction, DynamoDB storage]

1: The user uploads artifacts using pre-signed S3 URLs.

2 and 3: A putObject event triggers the Lambda function, which makes an API call to an EC2 instance running an HTTP server; the server reads the estimator from S3 and extracts the parameters.

4: The parameters are saved in DynamoDB.

 

Uploading artifacts :

I use AWS S3 to store the assets, making use of the pre-signed URL feature, which lets you hand out temporary URLs for uploading files to S3 and takes away the burden of managing permissions.
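
Under the hood, the Lambda behind the upload endpoint presumably generates that URL with something like this (a sketch; the bucket and key names are placeholders, not the ones from the repo):

import boto3

s3 = boto3.client('s3')
# temporary URL that allows a single PUT of the artifact, valid for one hour
upload_url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'your-artifact-bucket', 'Key': 'mlpreg.pkl'},
    ExpiresIn=3600
)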

To orchestrate all this, I like to use my favorite tool: the Serverless framework.

Here is the code on GitHub.

Deploying the Serverless stack :

$ serverless deploy


This will create the following endpoints:

  • POST /dev/asset
  • GET /dev/asset
  • PUT /dev/asset/{asset_id}
  • DELETE /dev/asset/{asset_id}

These endpoints allow you to create, update, and delete an artifact, which in this case is a model.

For more on this, check out the README page of this Serverless example.

Getting the parameters :

In this part, on the EC2 instance, we download the model, extract its parameters, and store them in DynamoDB.

Initially, I thought I could do all of this in Lambda, so I wouldn't have to create an EC2 instance just to read the parameters. Unfortunately, there are a couple of issues with that approach. One of them is the size of the dependencies: once you add the scikit-learn libraries, the Lambda zip reaches 60 MB. And even once uploaded, there was an issue running scikit-learn inside the Lambda, so for this iteration I decided to use a t2.micro EC2 instance.

The EC2 instance runs a Python web server that receives requests with an asset_id, downloads the asset, extracts the parameters, and stores them in DynamoDB.
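
A minimal sketch of what that server could do (the route, table name, and bucket name below are my assumptions; the real implementation is in server.py, linked below):

import boto3
from flask import Flask, request
from sklearn.externals import joblib

app = Flask(__name__)
s3 = boto3.client('s3')
table = boto3.resource('dynamodb').Table('ml_models')  # table name is an assumption

@app.route('/params', methods=['POST'])
def store_params():
    asset_id = request.form['asset_id']               # id of the uploaded artifact
    s3.download_file('your-artifact-bucket', asset_id, '/tmp/model.pkl')
    params = joblib.load('/tmp/model.pkl').get_params()
    # DynamoDB does not accept raw floats, so store the parameters as a string
    table.put_item(Item={'asset_id': asset_id, 'params': str(params)})
    return 'stored'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)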

this is the code for the server :

https://github.com/mbenachour/store_ml_models/blob/master/server.py

Testing the upload :

To test all this, I created a small Python script:


import sys
import requests
from sklearn.externals import joblib

def upload(filename):
    model = loadModel(filename)
    print (model.get_params())
    url = 'https://oo0cl2av91.execute-api.us-east-1.amazonaws.com/dev/asset'
    response = requests.post(url)
    print (response)
    presigned = response.json().get('body').get('upload_url')
    # upload the pickled model in binary mode to the pre-signed URL
    response = requests.put(presigned, data=open(filename, 'rb').read())
    print (response)

def loadModel(model_path):
    download_path = model_path
    #s3_client.download_file(BUCKET_NAME, model, '/tmp/model.pkl')
    return joblib.load(download_path)

upload(sys.argv[1])

To run it, use:

 python test.py  your_model.pkl 

If you look at your DynamoDB table, you will see that your model now has a description:

[Screenshot: DynamoDB item holding the model's parameters]

Querying the ERCOT public dataset using AWS Glue and Athena

ERCOT stands for the Electric Reliability Council of Texas. It manages the flow of electric power to more than 25 million Texas customers, representing about 90 percent of the state's electric load. As the independent system operator for the region, ERCOT schedules power on an electric grid that connects more than 46,500 miles of transmission lines and 600+ generation units. It also performs financial settlement for the competitive wholesale bulk-power market and administers retail switching for 7 million premises in competitive choice areas.

ERCOT also offers an online, public dataset that gives market participants information on a variety of topics related to the electricity market in the state of Texas, which makes it a good candidate for two AWS products: Glue and Athena.

Tools :

AWS Glue 

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics

AWS Athena 

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run

Scraping Data :

The data on the ERCOT website is available as a collection of .zip files. I used a Python scraper from this GitHub repository to collect only the CSV files.

As an example, we will collect data about the total energy sold from this page.

Using the scraper, the command would look something like this:


python -m ercot.scraper "http://mis.ercot.com/misapp/GetReports.do?reportTypeId=12334&reportTitle=DAM%20Total%20Energy%20Sold&showHTMLView=&mimicKey"

The script will download the CSV files and store them in a data folder:

[Screenshot: downloaded CSV files in the data folder]

At this point, we transfer the data to S3 so it is ready for AWS Glue. An optimization of this process could be a scheduled Lambda function that continuously uploads new datasets.
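
For the manual transfer, a one-liner with the AWS CLI does the job (the bucket name and prefix are placeholders):

aws s3 sync data/ s3://your-ercot-bucket/dam-total-energy-sold/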

Creating a Crawler 

You can add a crawler in AWS Glue to traverse the datasets in S3 and create a table that can be queried.


 

At the end of its run, the crawler creates a table containing the records gathered from all the CSV files we downloaded from the ERCOT public dataset; in this instance the table is called damtotqtyengysoldnp.


 

And now you can query Ahead! 

Using AWS Athena, you can run different queries on the table we generated previously. Here are a few examples:

Total energy sold by settlement point :

[Screenshot: Athena results for total energy sold by settlement point]

 

Getting the hours of the day 11/12/2018 with the most energy sold :

[Screenshot: Athena results for the hours with the most energy sold]
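
As a sketch, the first of these queries can also be run from Python with boto3. Only the table name below comes from the crawler run above; the column names and database name are assumptions, so check the schema the crawler actually generated:

import boto3

athena = boto3.client('athena')

# total energy sold by settlement point (column names are assumptions)
query = """
    SELECT settlement_point, SUM(total_energy_sold) AS total_sold
    FROM damtotqtyengysoldnp
    GROUP BY settlement_point
    ORDER BY total_sold DESC
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'ercot'},  # database name is an assumption
    ResultConfiguration={'OutputLocation': 's3://your-athena-results/'}
)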

 

 

How can AIOps help you prevent the next major incident?

What is it?

AIOps is a term that has been used in the last few years to describe the ability to derive intelligence from the day-to-day data that IT operations generate. The data sources can vary from monitoring tools like SolarWinds, to service desk tools like ServiceNow, to automation tools like configuration management (Chef, Puppet, etc.), to log search platforms like Splunk.


One area where AIOps can be an asset to operations teams is incident prediction and remediation; there are others, like storage and capacity management, resource utilization, and so on.

How can AIOps help prevent the next outage :

The footprint of digital systems and businesses is increasing every day, and so is the speed at which data is produced.

For example, a Palo Alto firewall can produce up to 12 million events in one day. Manual correlation of that data is nearly impossible, which is why we need an overview of the entire landscape of data produced by IT operations, and a transformation of that data so it can serve as training and test sets for machine learning.

Starting from the premise that an incident is the result of a change (voluntary or involuntary) to a configuration, a device, a network, or an application, all these changes, if monitored and reported correctly, can provide good context for the root-cause analysis of an incident.

You can create an ML model that will help you predict the next outage, notify operations teams, and reduce downtime.

Suppose you transformed the input data gathered from all your sources, organized it into a dataset like the one below, and used a supervised learning process to create an ML model:

[Table: example of a labeled dataset built from operational events]
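
As an illustration only (the file and column names below are made up), a labeled dataset like that can feed a standard supervised classifier:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# hypothetical columns: change_made, error_rate, cpu_spike, ..., incident (label)
data = pd.read_csv('ops_events.csv')
X = data.drop('incident', axis=1)
y = data['incident']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # fraction of outcomes predicted correctly on held-out data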

Your model will then be able to predict future incidents when fed real-time input coming from your tools and logs:

[Diagram: real-time operational data feeding the trained model to predict incidents]

Over time, with more data, your model will get better at detecting future anomalies, with much higher accuracy.

In conclusion

There is a lot of writing out there about AIOps, but the application, in my opinion, is a bit harder, for a couple of reasons: one, the spectrum of tooling in IT operations is very wide; two, data structures differ from one organization to another. This means that trying to apply a generic machine learning process to produce insights will be impossible at worst and lacking in accuracy at best.

For an organization to get intelligent insights from AIOps, there has to be an internal effort to train its own models, because the quality of future predictions of major incidents will depend essentially on the quality of the training and test sets.

 

 

 

Links :

https://blogs.gartner.com/andrew-lerner/2017/08/09/aiops-platforms/

https://www.ca.com/us/products/aiops.html

https://www.splunk.com/blog/2017/11/16/what-is-aiops-and-what-it-means-for-you.html

Deploying apps and ML models on Mesosphere DC/OS

Have you ever thought of your data centers and cloud infrastructure (private and public) as one big computer, where you can deploy your applications with the click of a button, without worrying too much about the underlying infrastructure? Well, DC/OS allows you to manage your infrastructure from a single point, offering the possibility to run distributed applications, containers, services, and jobs while maintaining a certain abstraction from the infrastructure layer, as long as it provides computing, storage, and networking capabilities.

After deploying my ML model on a Kubernetes cluster and as a Lambda function, I will now deploy it on a DC/OS cluster.

What is DC/OS :

DC/OS is a datacenter operating system: it is itself a distributed system, a cluster manager, a container platform, and an operating system.

DC/OS Architecture Layers

DC/OS manages three layers: software, platform, and infrastructure.

The dashboard :

[Screenshot: the DC/OS dashboard]

The catalog :

The DC/OS UI offers a catalog of certified and community packages that users can install in seconds, like Kafka, Spark, Hadoop, and MySQL.
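
Installing one of those packages from the CLI is also a one-liner, for example:

dcos package install kafka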

 

 

Deploying apps and ML models on DC/OS :

The application I'm deploying is a web server running the model I created in my previous posts to make predictions.

DC/OS relies on an application definition file that looks like this:

app.json :

{
    "volumes": null,
    "id": "mlpregv3",
    "cmd": "python server.py",
    "instances": 1,
    "cpus": 1,
    "mem": 128,
    "disk": 0,
    "gpus": 0,

    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "mbenachour/dcos-mlpreg:1",
            "forcePullImage": false,
            "privileged": false,
            "network": "HOST",
            "portMappings": [
                { "containerPort": 8088, "hostPort": 8088 }
            ]
        }
    }
}

 

The rest of the code can be found in my GitHub repo.

After you configure your DC/OS CLI and log in, you can run the deploy command:
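
The deploy itself goes through Marathon; it is most likely the standard command, something like:

dcos marathon app add app.json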


If we take a look at the UI, we can see that the app/web server has been deployed:

[Screenshot: the mlpregv3 service running in the DC/OS UI]

Deploying machine learning models on AWS Lambda with Serverless

In the last post, we talked about how to deploy a trained machine learning model on Kubernetes.

Here is another way of deploying ML models: AWS Lambda + API Gateway.

[Architecture diagram: API Gateway, Lambda, and S3]

Basically, your model (mlpreg.pkl) is stored in S3; the Lambda function downloads the model and uses it to make predictions, and another call lets you get the model hyperparameters and send them back to the user.


To deploy the AWS services, we will use a framework called Serverless.

Serverless lets you, with a single configuration file, define functions, create resources, declare permissions, configure endpoints, and more.

Serverless uses one main config file and one or more code files:

  • handler.py : the lambda function
  • serverless.yml : serverless configuration file

Here is what the Serverless configuration file for this example looks like:


service: deploy-ml-service
plugins:
  - serverless-python-requirements
provider:
  name: aws
  runtime: python2.7
  iamRoleStatements:
      - Effect: Allow
        # Note: just for the demo, we are giving full access to s3
        Action:
          - s3:*
        Resource: "*"
functions:
  predict:
    handler: handler.predict
    events:
      - http:
          path: predict
          method: post
          cors: true
          integration: lambda
  getModelInfo:
    handler: handler.getModelInfo
    events:
      - http:
          path: params
          method: post
          cors: true
          integration: lambda

As described in the config, we will create two functions: one makes a prediction using the model we built in the last post, and the other displays the model hyperparameters:

  • predict
  • getModelInfo

To load the model, we have:

  • load_model : loading the stored model from S3

handler.py

from sklearn.externals import joblib
import boto3

BUCKET_NAME = 'asset-s3-uploader-02141'

def predict(event,context):
  input = event["body"]["input"]
  modelName = event["body"]["model_name"]
  data = float(input)
  # scikit-learn expects a 2D array of samples
  return loadModel(modelName).predict([[data]])[0]

def loadModel(model):
  s3_client = boto3.client('s3')
  download_path = '/tmp/model.pkl'
  s3_client.download_file(BUCKET_NAME, model, '/tmp/model.pkl')
  return joblib.load(download_path)

def getModelInfo(event,context):
  model = event["body"]["model_name"]
  return loadModel(model).get_params()

$ serverless deploy

Yep, that's all it takes, and your services will be deployed in seconds:

[Screenshot: serverless deploy output listing the created endpoints]

Run the tests:

Getting the model hyperparameters :


root@58920085f9af:/tmp/deploy# curl -s -d "model_name=mlpreg.pkl" https://abcefgh123.execute-api.us-east-1.amazonaws.com/dev/params | python -m json.tool
{
    "activation": "relu",
    "alpha": 0.001,
    "batch_size": "auto",
    "beta_1": 0.9,
    "beta_2": 0.999,
    "early_stopping": false,
    "epsilon": 1e-08,
    "hidden_layer_sizes": [
        1000
    ],
    "learning_rate": "constant",
    "learning_rate_init": 0.01,
    "max_iter": 1000,
    "momentum": 0.9,
    "nesterovs_momentum": true,
    "power_t": 0.5,
    "random_state": 9,
    "shuffle": true,
    "solver": "adam",
    "tol": 0.0001,
    "validation_fraction": 0.1,
    "verbose": false,
    "warm_start": false
}

Making Predictions :


root@58920085f9af:/tmp/deploy# curl -s -d "input=1&model_name=mlpreg.pkl" https://abcdefg123.execute-api.us-east-1.amazonaws.com/dev/predict | python -m json.tool
0.13994134155335683

Automating the training and deployment of ML models on Kubernetes

With the rise of machine learning, the need to automate and streamline model deployment has become a necessity, pushed mostly by the fact that ML models, as a new way of programming, are no longer an experimental concept but rather day-to-day artifacts that can also follow a release and versioning process.

Here is a link to the code used below: GitHub.

Throughout this example, I will:

  • train a model.
  • serialize it and save it.
  • build a Docker image with a front-end web server.
  • make a deployment on a Kubernetes cluster.

Requirements:

scikit-learn

Docker

Minikube & Kubernetes

Building The Model

Training  data : 

Our training data is generated with the math function y = sin(2*π*tan(x)), where x is between 0 and 1 with an increment of 0.001.

x = np.arange(0.0, 1, 0.001).reshape(-1, 1)

x = [[ 0. ]
[ 0.001]
[ 0.002]

………

[ 0.997]
[ 0.998]
[ 0.999]]

y = np.sin(2 * np.pi * np.tan(x).ravel()) #with max/min values of 1,-1

[Figure: plot of the generated training data]

Fitting the Model : 

In this example, I will use a multilayer perceptron regressor implemented by the scikit-learn Python library.

This is what the regressor looks like with all the parameters (already tuned):

reg = MLPRegressor(hidden_layer_sizes=(500,), activation='relu', solver='adam', alpha=0.001,
                   batch_size='auto', learning_rate='constant', learning_rate_init=0.01,
                   power_t=0.5, max_iter=1000, shuffle=True, random_state=9, tol=0.0001,
                   verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True,
                   early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999,
                   epsilon=1e-08)
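
The actual training call is not shown in the post; with scikit-learn it is simply a fit on the generated data:

reg.fit(x, y)
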
Test Data :

For testing, we will use a generated set of data as well:

test_x = np.arange(0.0, 1, 0.05).reshape(-1, 1)

Prediction :

test_y = reg.predict(test_x)

Results :

Continuous blue is the real output, dotted red is the predicted output.

[Figure: actual output (continuous blue) vs. predicted output (dotted red)]

Saving the model :

I used joblib, scikit-learn's persistence helper built on Python's pickle serialization:

joblib.dump(reg, 'mlpreg.pkl')

This will save your model to a file named mlpreg.pkl.

Deploying the model

Building an image :
I created a Docker image for deploying the model behind a web server:
 FROM python:2.7.15-stretch

COPY MLPReg.py .

COPY server.py .

RUN python -m pip install --user numpy scipy matplotlib ipython jupyter pandas sympy nose

RUN python -m pip install -U scikit-learn

RUN python MLPReg.py

EXPOSE 8088

CMD python server.py
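
The server.py copied into the image is not shown in the post (the real one is in the GitHub repo). Here is a minimal sketch of what it could look like, assuming a /predict?input=... endpoint on port 8088 (the route and parameter name are my assumptions):

from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from urlparse import urlparse, parse_qs
from sklearn.externals import joblib

# load the model trained and saved during the docker build
model = joblib.load('mlpreg.pkl')

class PredictHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # expects requests of the form /predict?input=0.1
        query = parse_qs(urlparse(self.path).query)
        value = float(query['input'][0])
        prediction = model.predict([[value]])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(prediction))

HTTPServer(('', 8088), PredictHandler).serve_forever()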

To build the image, you can run this:
docker build -t mbenachour/mlpreg:latest .
Kubernetes deployment :
This is the Kubernetes YAML file that describes the deployment:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mlpreg-deployment
  labels:
    app: mlpreg
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mlpreg
  template:
    metadata:
      labels:
        app: mlpreg
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: mlpreg
        image: mbenachour/mlpreg:latest
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8088
---
apiVersion: v1
kind: Service
metadata:
  name: mlpreg-svc
  labels:
    app: mlpreg
    #tier: frontend
spec:
  type: NodePort
  ports:
  - port: 8088
  selector:
    app: mlpreg
    #tier: frontend

You can deploy it to the Kubernetes cluster:
kubectl apply -f mlp.yml
To check on the status of your Kubernetes services:
kubectl get services
You should see something similar to this:
[Screenshot: kubectl get services output showing the mlpreg-svc NodePort service]

Making predictions

To get the service URL (in my case I'm using Minikube), run:

$minikube service mlpreg-svc --url

http://192.168.99.105:32397
To make a prediction using the API for an input of 0.1:
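
The request would look something like this (the endpoint format is my assumption, matching the server sketch earlier):

curl "http://192.168.99.105:32397/predict?input=0.1"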

Pipeline.ai

A lot of products have been introduced to help solve this problem; one of them is Chris Fregly's project, pipeline.ai.
The project gives you the possibility to create, train, and deploy models using different frameworks:
– TensorFlow
– scikit-learn
– PyTorch
implementing many of the most-used ML algorithms, like linear regression.
