Ciseco Visit

Earlier this week I visited Miles Hodkinson who runs Ciseco.

Ciseco supply us with the Open Kontrol Gateway web-connected base station; however, Ciseco are best known for their range of wireless modules based on the Texas Instruments CC1110 chip. Most of Ciseco's products are manufactured in-house in the UK, and it was fascinating to see the manufacturing process in action. I took a few photos which might be of interest to those of you who are curious about how things are made.

The surface mount electronics assembly process has three main stages:

1. Solder paste is applied to the PCB pads using a scraper and a carefully aligned laser-cut stencil.

 
Solder paste application

2. The pick and place machine places the SMT components in their correct locations and orientations on the board.

Pick and place - the components are supplied on the white reels

3. After a quick visual check, the board with the placed components is passed through the reflow oven, which follows a carefully monitored temperature profile for a precise amount of time. This melts the solder paste and solders the components into place.


Reflow Oven
The finished boards are then taken away to be flashed and tested; finally, the through-hole components such as the antenna socket are soldered on by hand.

The finished boards 
View the full gallery of photos, including some videos of the pick and place in action:

Ciseco visit


Pre-assembled Nanode RF



We're happy to announce that we are now selling Nanode RF web-connected emonBase gateways pre-assembled. This makes it even easier and quicker to get started with energy monitoring: 


The pre-assembled Nanode RF, which uses surface mount electronics, has been designed and assembled in the US by Wicked Devices. The Nanode was originally designed by our friend Ken Boak, who has written an interesting blog post giving a brief history of the Nanode.

Ken writes: 


Nanode was conceived back in the summer of 2010, when I returned from a trip to Shenzhen, China. I was between jobs, and had a bit of time on my hands for tinkering with some new product ideas.
Nanode arose out of the need to find a cheaper way of connecting simple open source hardware, such as Arduino to the internet, so that Arduino could be used for a range of sensing and control applications.

Early Nanode prototype controlling an RGB lamp
Read his full blog post here: http://sustburbia.blogspot.co.uk/2012/10/a-brief-history-of-nanode.html


Emoncms development update: Modules

I've been a little quiet over the last few weeks, and emoncms blog posts and progress updates have been few and far between, but that does not mean development has stalled; on the contrary, a lot of significant development work has been going on.

At the start of the month, in the October emoncms bug and dev list, I outlined the idea of refactoring and reorganising emoncms along more modular lines, pointing to some earlier thoughts on it. Since then I have made the leap and started the refactoring process. Ilde gave me a lot of help with splitting up the language files for translations, along with good ideas for the menu system implementation, and Baptiste Gaultier gave me some impetus to look at modularising the dashboard widget code. We are now almost there.

The old structure had a folder each for Controllers, Models and Views, with the controller, model and view scripts for all the different features mixed together in these folders, plus some files such as setup.php which contained code for all the different features. As the application grew this was getting a bit unwieldy: it was harder to read and understand the code, to add new features, and to keep track of development.

This is how the old structure looked:

Controllers
  feed_controller.php (part of feed module * )
  input_controller.php (part of input module * )
  ...
Models
  feed_model.php (part of feed module *)
  input_model.php (part of input module *)
  ...
Views
  feed_list_view.php (part of feed module *)
  input_list_view.php (part of input module *)
 
setup.php (mixed code for input, feeds, etc) - to add a new module you had to add its db schema code in here.
notify.php (part of notify module but in the root emoncms directory)
* Module or feature specific files are all in different locations 

By changing the structure to:

Modules
  feed
    feed_controller.php
    feed_model.php
    feed_list_view.php (includes no feed message)
    feed_schema.php
  input
    input_controller.php (merged with enum.php)
    input_model.php
    input_list_view.php (includes no input message)
    input_schema.php

and adopting the following design principles:

  • A module adds functionality in a self-contained way, not requiring (or minimising as much as possible) modification of any other modules or the core framework.
  • A module can depend on another module, e.g. dashboards depend on feeds, but feeds does not need to depend on dashboards.
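
To make the layout concrete, here is a sketch of what a module's schema file might contain. The $schema variable name and array layout here are my illustration rather than a fixed convention (check the modules in the repository for the real details); the point is that each module now declares its own tables rather than adding them to a central setup.php:

<?php
// feed_schema.php - a sketch of a module schema file describing the
// module's database tables so the framework can create them on setup.
$schema['feeds'] = array(
  'id'     => array('type' => 'int(11)', 'Key' => 'PRI', 'Extra' => 'auto_increment'),
  'userid' => array('type' => 'int(11)'),
  'name'   => array('type' => 'text')
);
?>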

I think this change gives us the following benefits:

  • Clearer application structure - it's easy to find module-specific code.
  • Adding a new module is really easy: you don't have to put the controller in the controllers folder, the view in the views folder, the model in the models folder and the database setup in setup.php. All you have to do is download the module folder and place it in the Modules folder.
  • We can then use git to develop particular modules independently of the rest. A developer looking at commit logs therefore only needs to concern themselves with that particular module rather than the application as a whole.
  • It's easier to test modules and the framework independently of each other.
  • More developer autonomy and project self-organisation: you can add a module to the optional modules list without having to coordinate closely with the main emoncms bundle.
The new modular emoncms version can be found here:

https://github.com/emoncms/emoncms

Given the potentially escalating number of repositories as new modules are added, I thought it would be best to create a dedicated github organisation for emoncms, hence the new location. If you go to http://github.com/emoncms you will see quite a few repositories:


The emoncms repository is the main emoncms bundle, which is built up from the emoncms_framework and the other core modules such as input, feed, dashboard and so on. Then there are repositories for the framework and each module independently of the main bundle, including modules that are not in the main bundle, such as the raspberry pi module, which I will come back to in the next post.

The idea is that the main bundle could be built automatically from the module repositories and the framework, and that the commit history could be kept on a per-module basis.
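
One possible mechanism (my speculation, not a settled decision, and the repository names here are illustrative) would be git submodules, where the main bundle repository just records a pointer to a specific commit in each module repository:

git submodule add https://github.com/emoncms/feed Modules/feed
git submodule update --init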

Adding a module
To add a particular non-core feature such as the raspberrypi interface module to your emoncms installation, it's now just a matter of downloading the raspberrypi repository and placing it in the Modules folder of your emoncms installation, much the same way as you add libraries to an Arduino project.
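
For example, something along these lines (the paths are assumptions; adjust them to wherever your emoncms installation lives):

cd /var/www/emoncms/Modules
git clone https://github.com/emoncms/raspberrypi.git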

Things to complete
As I mentioned at the start, most of the refactoring work has been done, but there are still some parts that are not complete, such as the notify, admin and statistics modules, and some missing message boxes (such as those shown when no inputs, feeds or dashboards exist) that need to be re-added.

The feed module implementation is also a little different in that there is no longer a feed_relation table linking a user to their feeds; this is now handled in the feeds table, which simplifies things but means a simple database migration needs to take place if you're upgrading from an old emoncms version. I will write a short script to do this soon and document the migration process.
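
In the meantime, here is a rough sketch of what such a migration might look like. This is not the final script: the userid and feedid column names are assumptions, so check them against your actual schema and back up your database before running anything like it:

<?php
// Hypothetical migration sketch: copy the user-to-feed links from the
// old feed_relation table into a userid column on the feeds table.
define('EMONCMS_EXEC', 1);
require "Includes/process_settings.php";
require "Includes/db.php";
db_connect();

// Add the new column (assumed name) to the feeds table
db_query("ALTER TABLE feeds ADD COLUMN userid INT(11) NOT NULL DEFAULT 0");

// Copy each link across from the old relation table
$result = db_query("SELECT * FROM feed_relation");
while ($row = db_fetch_array($result)) {
  $userid = intval($row['userid']);
  $feedid = intval($row['feedid']);
  db_query("UPDATE feeds SET userid = $userid WHERE id = $feedid");
}
?>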

Another important change is the API used to post data from base stations to emoncms: what used to be api/post is now input/post, as this action is handled by the input module. If you want, you can sidestep input processing completely and just use the feed module, which now has a full API. The API calls for both the feed and input modules are documented in helper pages linked from the top of the input and feed pages.
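
For example (the exact json syntax here is from my own setup; see the helper pages for the definitive documentation), a base station posting a power reading would change from:

http://yourserver/emoncms/api/post?apikey=APIKEY&json={power:252.4}

to:

http://yourserver/emoncms/input/post?apikey=APIKEY&json={power:252.4}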

That's it for now; in the next few posts I will discuss the raspberrypi module and also how to migrate from the old emoncms version.

Introducing the emonTx Shield

The OpenEnergyMonitor project started in 2009 playing about with an Arduino Duemilanove. Since then the project has very much built on the Arduino platform, both hardware and software. 

We love Arduino; it's fantastic how quick and easy Arduino makes it to get started blinking an LED or reading an analogue voltage level.

With that in mind it's with great pride that we are launching an Arduino compatible emonTx Energy Monitoring Shield to make getting started in energy monitoring just as easy. 


We are very pleased that the emonTx shield is compatible with the new Arduino Leonardo as well as the Uno and Duemilanove. 

The emonTx Shield has four CT current sensor inputs, an AC voltage sample input, an on-board temperature sensor and an RFM12B wireless transceiver. With just a CT sensor the shield can monitor current and apparent power. With the addition of an AC-AC adapter to provide an AC voltage sample, the shield can monitor real power, AC RMS voltage and power factor.
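
For the curious, here is a sketch of the maths behind those readings (written in PHP for consistency with this blog; the actual shield firmware is an Arduino sketch and works sample-by-sample, so treat this purely as an illustration of the calculation):

<?php
// Given arrays of simultaneous voltage and current samples taken over
// a whole number of mains cycles, compute the standard power quantities.
function power_stats(array $v, array $i)
{
  $n = count($v);
  $sum_p = 0; $sum_v2 = 0; $sum_i2 = 0;
  for ($j = 0; $j < $n; $j++) {
    $sum_p  += $v[$j] * $i[$j];   // instantaneous power v*i
    $sum_v2 += $v[$j] * $v[$j];
    $sum_i2 += $i[$j] * $i[$j];
  }
  $real_power     = $sum_p / $n;            // average of v*i
  $vrms           = sqrt($sum_v2 / $n);     // RMS voltage
  $irms           = sqrt($sum_i2 / $n);     // RMS current
  $apparent_power = $vrms * $irms;
  $power_factor   = $apparent_power > 0 ? $real_power / $apparent_power : 0;
  return array($real_power, $apparent_power, $vrms, $irms, $power_factor);
}
?>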

The inclusion of the RFM12B wireless transceiver enables the emonTx shield to integrate with other OpenEnergyMonitor modules, such as an emonGLCD display unit or an emonBase web-connected base-station for logging and visualisation with emoncms.

The emonTx shield is now available as a full kit (with or without the RF module) or as PCB only in the OpenEnergyMonitor shop.

As with all OpenEnergyMonitor hardware modules it's fully open-source, fully documented and comes with plenty of Arduino example software sketches. Follow the links from the main documentation page: http://openenergymonitor.org/emon/emontxshield

The emonTx Shield is based on the popular emonTx, a low power stand-alone energy monitoring unit. The stand-alone original emonTx is better suited to long term energy monitoring installations and/or low power operation, and can also be bought in kit form from the OpenEnergyMonitor shop.

If you have any questions or run into problems, please post on our active community forums.

Thank you to everyone who has helped contribute to the project.

Display live emoncms energy data on your desktop

Alexander Price @saelaenx has documented a neat setup where he uses GeekTool on his Mac to pull his live energy data and current temperatures from emoncms. The data itself comes from an emonTx and an emonGLCD, posted online with a NanodeRF.

It would be great to have a similar arrangement for Ubuntu Linux using open-source tools; that way I could have the same on my desktop :-)

The same goes for an Android widget to display my live energy data on my home screen... one for the future.

Great work, Alex. Read his full blog post here, and see below for a summary:

Since I got a few people asking about this on twitter I decided to write a quick blog post on how I set up my current Geektool arrangement. Obviously this method only works on a mac, but I imagine a similar result could be achieved on Windows using Rainmeter.
GeekTool Desktop Widget 

emoncms dashboard

emonGLCD with Alex's nice custom monospaced fonts

Read his full blog post here: http://saelaenx.wordpress.com/2012/09/27/geektool-setup/

Logo Ideas

Recently our friend and freelance artist/graphic designer Gwil Noble has knocked up a few logo possibilities for OpenEnergyMonitor.

Gwil has also made some gorgeous little context icons we will be using in our new front page re-design soon.




For the main icon there are three possibilities so far; they're all at the prototype stage, and there is a possibility that we end up using something totally different! If you have any creative ideas for 'D', 'E' or even 'F' we would love to be able to add more options to the list.

We have set up a poll to enable you to vote on which logo you prefer: http://openenergymonitor.org/emon/logovote

Forum discussion thread here: http://openenergymonitor.org/emon/node/1093

Data portability: Version 1 complete!

Following on from the last series of posts detailing an implementation for exporting data from and importing data into emoncms, these features are now fully integrated into the latest emoncms version, making it possible to move feed data between emoncms instances with ease.

I have also created a rough initial interface for selecting the remote account to download from and the feeds to download. You can get to this page by clicking on the Sync feeds button on the feed list page:

To select an account, enter the account URL and write apikey and click ok; after a moment the feed list of the remote account will appear.

You can then click on the feeds you wish to download, which will enter them into a download queue. As detailed in the last post, the download and import process is run by a separate script called import.php, which can be found in the root directory of the emoncms install. You can either run this script manually or set up a cron job to run it periodically.
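
For example, a crontab entry along these lines would run it once a minute (the paths are just examples; adjust them to wherever your emoncms installation lives):

* * * * * php /var/www/emoncms/import.php >> /var/log/emoncms_import.log 2>&1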


Once you start downloading feeds, if you refresh the page you will see the 'remote ahead by' value decrease as data is downloaded, starting with the oldest data first. It is also possible to sync the data again at a later date to bring the local copy up to date.

This feature makes for a really convenient way to backup feed data, as you only need to download the latest changes rather than all the data every time.

There are still a few things missing in the current implementation that would be great to add:

- The first is that histogram data is not downloaded correctly, as the import and export scripts currently only work with one column of data.

- It would be great to have a feature to select all the remote feeds to download, rather than having to click each one.

- It would be nice to have the sync interface page built with javascript/jquery so that it updates progress automatically without having to refresh the page.

But other than these issues, which can be fixed and improved in time, I think it's largely functional and good to go.

Data portability: Importing feed data into emoncms

The last post detailed an implementation for exporting feed data from emoncms; this post explores an implementation for importing feed data, which can be used in conjunction with the export implementation to sync or download feed data from one emoncms instance to another, enabling full feed data portability.

I started by trying to create an import script that was built into a normal page action; you would go to a page such as:

sync/feed?id=10...

The problem with this is that downloading and importing a feed takes a long time, and so the page appears unresponsive for that time; the loading animation just turns and turns.

If you mistakenly click on the refresh button, the initial process will not close, and so you start to get duplicate processes which mess up the data and increase the server load.

Ideally, while downloading and importing the data there would be a progress bar so that you can see what's happening, plus other useful feedback if things go wrong.

To get this kind of feedback you need to be able to update the browser as the process is running.

You also want the syncing to happen directly between servers, rather than server to local computer to server, as this is faster when both servers are remote and the local computer's internet access is relatively slow; i.e. you want the import process handled in php, which runs server side (you could use another server-side language if you wanted).

The way to get feedback about what's happening, while having the import process handled in php, is to have two separate scripts: one that handles the importing and another, which can be refreshed regularly, that serves the gui webpage.

We can extend the idea of a separate import script to sort out the problem of duplicate processes too, by implementing a download and import queue.

When a user clicks to download/sync a feed on the sync page, the action behind this enters the details of that feed into the queue; at this point it can also check that the feed does not already exist in the queue, as in the sketch below.
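
Here is a rough sketch of that enqueue action, reusing the db helper functions and the importqueue columns you can see in the import script below (the duplicate check on localfeedid is just one way to do it, and $localfeedid and friends would come from the page request):

// Add a feed to the import queue unless it is already queued
$exists = db_fetch_array(db_query(
  "SELECT queid FROM importqueue WHERE localfeedid = '$localfeedid'"));
if (!$exists) {
  db_query("INSERT INTO importqueue (localfeedid, baseurl, apikey, feedid)
    VALUES ('$localfeedid', '$baseurl', '$apikey', '$feedid')");
}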

We can then run the import script independently of the gui page, for example via a scheduled cron job that runs the script once a minute. We can put a check in there to ensure that only one instance of the script is running at any given time too.

The import script then checks if there are feeds to be downloaded in the import queue, working through each queue item sequentially.

The added advantage of the queue implementation is that we can keep the load that importing places on the server low, by allowing only one feed to be downloaded and imported at any given time.

So all in all this implementation:
  • Enables feedback on import progress.
  • Ensures we don’t get duplicate feed syncing processes.
  • Ensures that the server load is kept under control.
On to the code. Here's the import script that works through the import queue, downloading and importing each item sequentially. I would be interested to hear if there are any ways the efficiency of the export and import process can be improved.

<?php

// Ensure only one instance of the script can run at any one time.
$fp = fopen("importlock", "w");
if (! flock($fp, LOCK_EX | LOCK_NB)) { echo "Already running\n"; die; }

// Connect to the database
define('EMONCMS_EXEC', 1);
require "Includes/process_settings.php";
require "Includes/db.php";
switch(db_connect()) {
  case 0: break;
  case 1: break;
  case 3: show_dbsettingserror_message(); die;
}

// Fetch the import queue
$result = db_query("SELECT * FROM importqueue ORDER BY `queid` Asc");

// For each item in the queue
while($row = db_fetch_array($result))
{
  $queid = $row['queid'];
  $feedname = "feed_".trim($row['localfeedid']);

  // Check if we have already downloaded part of the feed and get the last
  // value entered so that we don't download and insert data that has already
  // been inserted. This makes this utility useful for syncing in general,
  // and in particular for backups that only download the latest changes.
  $feed_result = db_query("SELECT * FROM $feedname ORDER BY time Desc LIMIT 1");
  $feed_row = db_fetch_array($feed_result);
  $start = 0; if ($feed_row[0]) $start = $feed_row[0];

  // Open the file served from the export page on the remote server
  $url = $row['baseurl'].'/feed/export?apikey='.$row['apikey'].'&id='
    .$row['feedid'].'&start='.$start;

  echo "Opening file $url\n";
  $fh = @fopen( $url, 'r' );
  if (!$fh) { echo "Could not open $url\n"; continue; }

  // Read through the file
  $i = 0; $vals = "";
  while (($data = fgetcsv($fh, 0, ",")) !== FALSE)
  {
    $feedtime = $data[0]; $value = $data[1];

    if ($feedtime!='' && $value!='')
    {
      $i++;
      // Construct the values part of the query
      if ($i!=1) $vals .= ',';
      $vals .= "('$feedtime','$value')";

      // Execute query every 400 rows (same block size as export script)
      if ($i>400)
      {
        $i = 0;
        if ($vals) db_query("INSERT INTO $feedname (`time`,`data`) VALUES ".$vals);
        $vals = "";
      }
    }
  }
  // If there are rows left over, insert them here at the end
  if ($vals) db_query("INSERT INTO $feedname (`time`,`data`) VALUES ".$vals);

  fclose($fh);

  echo "Transfer complete\n";
  echo "Deleting item $queid from queue\n";
  db_query("DELETE FROM importqueue WHERE queid = $queid");
}

?>

In the next post I will explain the interface for generating the feed import queue.

Data portability: Exporting feed data from emoncms

Following on from the previous two posts:
Making emoncms data portable
Data portability challenges

Here's my initial solution for feed export code that also allows its load on the server to be regulated.

In trying to work this out, I first tried requesting all the feed data (which could be millions of rows) in one mysql query and then writing the data to a file in one push. Testing on my laptop server installation, and monitoring the load with the linux top tool, the mysql query can use around 97% of the cpu for up to tens of seconds depending on the feed size. Once the query is complete, the writing of the file (an apache process) takes up another block of 97% cpu use until the file has finished downloading. Emoncms still seems fairly responsive in that time.

My next thought was to break that large query up into smaller blocks and introduce a delay between the reading of each block; maybe in future versions the delay could wait for other actions to finish before continuing.

Splitting the query into blocks of 400 rows, alternately loading and writing to the file without a delay, drops the load to around 50% for both the mysql and apache processes.

By introducing a delay of about 80 milliseconds (80,000 microseconds, as passed to usleep in the code below) it's then possible to drop the load down to roughly 10% for the apache process and a little less for the mysql process.

Increasing the block size and/or reducing the delay speeds up the transfer rate; reducing the block size and/or increasing the delay slows it down.

Any thoughts on improving this would be welcome, I will try and add it to the emoncms repo soon.

Here's the code:


<?php
// Open database etc here

// Extend timeout limit from 30s to 2 mins
set_time_limit(120);

// Feed id and start time of feed to export
$feedid = 1; $start = 0;

// Regulate mysql and apache load
$block_size = 400; $sleep = 80000;

$feedname = "feed_".trim($feedid);
$fileName = $feedname.'.csv';

// There is no need for the browser to cache the output
header("Cache-Control: no-cache, no-store, must-revalidate");

// Tell the browser to handle output as a csv file to be downloaded
header('Content-Description: File Transfer');
header("Content-type: text/csv");
header("Content-Disposition: attachment; filename={$fileName}");
header("Expires: 0");
header("Pragma: no-cache");

// Write to output stream
$fh = @fopen('php://output', 'w');

// Load new feed blocks until there is no more data
$moredata_available = 1;
while ($moredata_available)
{
  // 1) Load a block
  $result = db_query("SELECT * FROM $feedname WHERE time > $start
    ORDER BY time Asc LIMIT $block_size");

  $moredata_available = 0;
  while ($row = db_fetch_array($result))
  {
    // Write the row as csv to the output stream
    fputcsv($fh, array($row['time'], $row['data']));

    // Set new start time so that we read the next block along
    $start = $row['time'];
    $moredata_available = 1;
  }

  // 2) Sleep for a bit to regulate the load
  usleep($sleep);
}

fclose($fh);

In the next post I will discuss creating a script to import feed data by pointing directly at the export script on the other server, plus a background daemon process and queueing mechanism to manage the import process.

Data portability challenges

Continuing from yesterday's post on making emoncms data portable.

There are some challenges in implementing data portability for energy data: 5s energy feed data can grow to be quite large. I have several feeds that are approaching 100Mb and a total account size of about 700Mb, and that's only just approaching one year. (No surprise: at 5-second intervals a single feed accumulates roughly 6.3 million rows a year, so even a handful of bytes per row adds up to tens of megabytes.)

Moving large quantities of data uses server resources: cpu, memory and bandwidth. Emoncms.org is hosted on a BigV virtual machine, where you get 1GiB RAM, 25GiB disk space and 200GiB transfer for £10 a month, which is a fantastic deal for those specs.

As exporting or importing large feeds takes time, it's important that it does not hog resources on the server, making the server unusable for other users. Any import or export code needs to use a small percentage of the server resources, and ideally the amount of resources used should be adjustable. We also need to keep an eye on the data transfer limits.

In the next post I will post up the initial working feed export code.