
By John Esposito (DZone Zone Leader)

The Programming Challenges of IoT

08.14.2014

 Pragmatic developers can look at the Internet of Things in two ways:

  1. This is amazing. I can only begin to imagine how I can directly improve the world outside the set of networked computer boxes.

  2. This is terrifying. If something goes wrong, then it’s on me—and this time the system affected extends outside the set of networked computer boxes.

IoT is amazing in the way it bridges physical and virtual environments, but even the phrase “Internet of Things” should give a developer pause. Computers are pretty smart. Things are stupid. IoT tries to put Things online and tries to make them into inter-networked computers.

That’s pop-philosophy, but you want to develop in the real world. So what real-world challenges will you face when you shoot for the IoT moon?


Two Types of Challenges

It seems there are two types of programming challenges for the Internet of Things:
  1. Data and control (the comp-sci and networking stuff)

  2. Information and business logic (the info-sci and human-computer interaction stuff)

For this article, we’re going to talk about the programming problems we can solve around IoT. We’ll start at the bottom (data and control) and work our way up to the big picture (information and business logic).

Type 1: Data and Control

Challenge 1.1: Power

This one is pretty obvious. Many IoT devices are wireless, and no one has invented thumbnail fusion reactors yet. One solution is equally obvious: pick your algorithms carefully. If you can save cycles to perform a given task, then do it. Libraries for implementing power-optimized algorithms will presumably spring up in greater numbers, but even so, you may need to inject some heavy-duty comp-sci know-how into IoT app development.

The second solution is more complex than the first. Higher-level developers will have to think more about Dynamic Power Management (DPM), which just means: shutting down devices when they don’t need to be on and starting them up when they do. Normally the operating system worries about this, but an IoT application that orchestrates wearables and phones, for example, will know things that each device’s OS won’t—and therefore will be able to switch things on and off more intelligently than each device’s individual OS. Another option is to write or customize an embedded OS.
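A minimal sketch of what application-level DPM might look like (all names here are hypothetical; a real orchestrator would talk to actual device radios, not toy objects):

```python
class Device:
    """Hypothetical wrapper for a wearable's sensor radio (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.powered = False

    def power_on(self):
        self.powered = True

    def power_off(self):
        self.powered = False

    def read(self):
        return 42  # placeholder for a real sensor read


class PowerManager:
    """Application-level DPM: the orchestrator knows which devices the current
    task needs, so it can duty-cycle them more aggressively than each device's
    OS could on its own."""
    def __init__(self, devices):
        self.devices = devices

    def collect(self, needed):
        readings = {}
        for dev in self.devices:
            if dev.name in needed:
                dev.power_on()                   # wake only what this cycle needs
                readings[dev.name] = dev.read()
                dev.power_off()                  # back to sleep immediately
        return readings


devices = [Device("heart_rate"), Device("gps")]
pm = PowerManager(devices)
readings = pm.collect({"heart_rate"})  # GPS never wakes up this cycle
```

The point is the asymmetry of knowledge: the application knows the GPS reading is not needed this cycle, so the GPS radio never wakes, something the GPS device's own OS could not have decided.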

Challenge 1.2: Latency

Latency on IoT sits in two places: at the source and in the pipes. The basic problem is a physical one. Thing-chips often have to be small, which means that the chip can only be as powerful as current transistor technology allows. Another problem is power. Many small devices transmit and receive data in discrete active/sleep cycles (think TDMA) in order to save bandwidth and power, but latency rises as more power is saved.
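The power/latency tradeoff of duty cycling can be made concrete with a little arithmetic (a sketch; real TDMA schedules are more involved than a fixed wake period):

```python
def duty_cycle_tradeoff(period_s, active_s):
    """For a radio that is awake for `active_s` seconds out of every
    `period_s` seconds: returns the duty cycle (fraction of time powered)
    and the mean extra latency for a message that arrives at a uniformly
    random moment during the sleep phase."""
    duty = active_s / period_s
    mean_latency = (period_s - active_s) / 2  # expected wait until next wake
    return duty, mean_latency


# Halving the duty cycle (power) roughly doubles the added latency:
print(duty_cycle_tradeoff(1.0, 0.1))  # 10% duty cycle, ~0.45 s mean wait
print(duty_cycle_tradeoff(2.0, 0.1))  # 5% duty cycle, ~0.95 s mean wait
```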

Another tradeoff is that network topologies optimized for IoT can involve more hops over slower devices. Mesh networks, for example, tolerate the failure of a few nodes. Similarly, “fog” and “edge” computing paradigms relieve Internet infrastructure by doing as much as possible without hub-nodes. The downside is that each node (a) can’t do very much on its own and (b) can only talk to neighboring nodes.

The problem in the pipes is a matter of network infrastructure. Simply: the more Things, the less available bandwidth. Infrastructure technology will get faster, but cell networks won’t catch up overnight. And Things, unlike fancier computers, are often supposed to transmit blindly—that is, without anyone necessarily asking them to. This means there’s a massive potential for wasted bandwidth.
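One common mitigation is report-by-exception: transmit only when a reading has changed meaningfully since the last transmission. A minimal sketch (threshold and readings are illustrative):

```python
class DeadbandReporter:
    """Transmit a reading only when it moves more than `threshold` away from
    the last transmitted value; otherwise stay silent and save bandwidth."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_sent = None

    def maybe_send(self, value):
        if self.last_sent is None or abs(value - self.last_sent) > self.threshold:
            self.last_sent = value
            return value  # would go over the radio
        return None       # suppressed: no bandwidth spent


r = DeadbandReporter(threshold=0.5)
sent = [r.maybe_send(v) for v in [20.0, 20.1, 20.2, 21.0, 21.1]]
# sent == [20.0, None, None, 21.0, None]: only the first reading and the
# jump past the deadband are transmitted.
```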

Challenge 1.3: Unreliability

The third challenge flows from the first two. Devices are unreliable, and “Things” even more so. The distributed and decentralized virtues of IoT bring their own reliability problems. Here are just a few:

  • Ubiquitous devices are cheap, so they fail more often.

  • Truly ad-hoc connectivity implies ephemeral SLAs, so uptime and recovery time may be unclear.

  • Loosely controlled devices may have better things to do than give you their data (or computing resources), so concurrency may grow very complex.

  • Less-reliable hardware generates less-reliable information (‘does my outlying datapoint just signify device failure?’), so you may need to chew your data more thoroughly at the application level.

In a sense, IoT decouples low-level (sub-session-layer) channel capacity from high-level channel capacity, because the distribution of error sources on IoT is weighted more heavily toward originating and remote nodes. This means more error-correcting work falls to application developers.
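One application-level approach to the "does my outlying datapoint just signify device failure?" problem is robust outlier flagging, sketched here with a median-absolute-deviation filter (the threshold is illustrative, and the filter can only flag suspicious points; deciding between "failing sensor" and "real anomaly" still belongs to your business logic):

```python
import statistics

def flag_outliers(readings, k=3.0):
    """Flag readings far from the median, using the median absolute deviation
    (MAD) as the scale. Unlike mean/stddev, both statistics are robust to the
    very outliers we are hunting for."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings) or 1e-9
    return [abs(r - med) / mad > k for r in readings]


readings = [21.0, 21.2, 20.9, 21.1, 85.0]  # last point: dying sensor, or a fire?
print(flag_outliers(readings))  # [False, False, False, False, True]
```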

Type 2: Information and Business Logic

Challenge 2.1: Vast & Thin Data

Sensors on smartphones are already generating oceans of raw data. These sensors are pretty sophisticated. Every major mobile OS provides a unified, simple API to access clean sensor and geo data. But start grabbing this data and it’s not immediately clear what to do with it. Try to think of killer applications for barometric data—besides weather and elevation (with GPS)—off the top of your head. Raw sensor data is extremely thin. It doesn’t explain itself, and we haven’t yet produced a complete mapping from physical measurements to business logic—let alone software design.
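For the elevation case, the mapping from raw measurement to meaning happens to be well known: the international barometric formula. A sketch, assuming standard sea-level pressure (real applications would calibrate against local weather):

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """International barometric formula: converts pressure in hPa to altitude
    in meters, assuming standard sea-level pressure p0. One of the few obvious
    uses of raw barometer data; anything richer needs domain-specific modeling."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))


print(round(pressure_to_altitude(1013.25)))  # sea level -> 0 m
print(pressure_to_altitude(898.76))          # roughly 1000 m
```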

Even if you know what to do with sensor/geo data eventually, you may have to learn new algorithms and data structures to process it effectively. Geo-graphs aren’t CS101 graph data structures (for one thing, edge length is a first-class citizen of geo-graphs).
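A small illustration of the difference: Dijkstra over a geo-graph where edge weight is physical great-circle distance (haversine), not hop count. The city coordinates below are approximate and purely illustrative:

```python
import heapq
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def shortest_km(coords, edges, src, dst):
    """Dijkstra where each edge's weight is its physical length."""
    graph = {n: [] for n in coords}
    for u, v in edges:
        d = haversine_km(coords[u], coords[v])
        graph[u].append((v, d))
        graph[v].append((u, d))
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return math.inf


coords = {"london": (51.5074, -0.1278),
          "paris": (48.8566, 2.3522),
          "berlin": (52.52, 13.405)}
edges = [("london", "paris"), ("paris", "berlin")]
km = shortest_km(coords, edges, "london", "berlin")  # routed via Paris
```

In hop-count terms London and Berlin are two edges apart; in geo-graph terms the answer is a distance in kilometers, and the detour through Paris is longer than the (unavailable) direct path.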

The size of data over IoT is itself a problem. Wireless sensors beget tons of data. All the problems (and opportunities) of Big Data cascade naturally from IoT. Massively distributed computing on IoT devices is an exciting thought, but the toolchain for splitting calculations over a thousand idle Fitbits just isn’t here yet.

Challenge 2.2: Context-Sensitivity

Consider the term “ubiquitous computing,” defined as: what happens when wirelessly connected sensors and actuators, placed more or less everywhere, allow software to interact with much larger swaths of the physical world than just hardware or bare metal. Put ubiquitous computing on the Internet, and IoT makes the software context much larger. This has implications at two basic levels.

At a high computer-architectural level: IoT extends the concept of computing environment well outside the von Neumann machine and weakens the concept of peripheral I/O. In an IoT-world interface, sensors are input and actuators are output. As IoT devices process increasingly at the edge (within individual nodes), the devices that appear peripheral to other nodes are actually doing an awful lot of computation.

At a high business-logic level: the more stuff outside the computer-box affects the program, the less predictable the program behavior becomes at runtime. The same bizarrely-birthed memory leak might slow down the UI in a smartphone context but contribute to a cascading electrical grid failure in an IoT context. This means that IoT demands more self-monitoring and self-repairing code.
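A minimal sketch of what self-monitoring can mean in practice: a heartbeat watchdog that restarts a stalled component before the failure cascades outward (the timeout and restart hook are placeholders for whatever recovery mechanism your system actually supports):

```python
class Watchdog:
    """A component reports heartbeats; if it goes silent for longer than
    `timeout` seconds, the watchdog fires the `restart` hook rather than
    letting the stall propagate into the physical system."""
    def __init__(self, timeout, restart):
        self.timeout = timeout
        self.restart = restart
        self.last_beat = 0.0

    def heartbeat(self, now):
        self.last_beat = now

    def check(self, now):
        if now - self.last_beat > self.timeout:
            self.restart()
            self.last_beat = now  # give the restarted component a fresh window
            return True           # intervened
        return False              # healthy


restarts = []
wd = Watchdog(timeout=5.0, restart=lambda: restarts.append("restarted"))
wd.heartbeat(0.0)
wd.check(3.0)   # healthy: no action
wd.check(10.0)  # silent for 10 s: restart fires
```

Timestamps are passed in explicitly here to keep the sketch deterministic; a real watchdog would read a monotonic clock and run on its own timer.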

Two Types of Solutions

Plenty of researchers are working on ambitious solutions to the programming challenges presented by IoT [1][2]. There are also a few strategies you can use right now to address some of the IoT programming challenges mentioned above.

Reactive Programming

This general-purpose paradigm responds to all major application-level challenges and embraces the opportunities presented by IoT. The four defining attributes of a reactive application are: event-driven, scalable, resilient, and responsive [3]. These four are excellent guiding principles for IoT applications at a high, cross-stack level.
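The event-driven leg of that definition can be sketched with a toy publish/subscribe bus (names are illustrative; real reactive toolkits add backpressure, scheduling, and supervision on top of this core):

```python
class EventBus:
    """Minimal event-driven core: producers publish, subscribers react.
    Components never poll or block on one another, which is the foundation
    the other reactive traits (scalable, resilient, responsive) build on."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self.handlers.get(topic, []):
            try:
                handler(event)
            except Exception:
                pass  # resilience: one faulty handler can't take down the bus


alerts = []
bus = EventBus()
bus.subscribe("temp", lambda e: alerts.append(e) if e > 30 else None)
bus.publish("temp", 22)  # ignored by the alert handler
bus.publish("temp", 35)  # triggers the alert
# alerts == [35]
```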

Flow-Based Programming and the Actor Model

Both build applications from strongly independent components that can affect one another only through messages. Both are deeply amenable to concurrency (shared state is discouraged, for example), nondeterminism, and scaling without exponential complexity growth, because components are black boxes. FBP is a bit more pragmatic and restrictive, while the actor model is less restrictive and a bit harder to implement [4][5]. FBP has already been implemented in JavaScript (NoFlo [6]), and the actor model in Java and Scala (Akka [7]).
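A toy actor sketch shows the core discipline: private state, a mailbox, and messages as the only way in (real actor runtimes such as Akka add supervision, routing, and distribution on top):

```python
import queue
import threading

class SumActor:
    """Toy actor: state is private, and the only way to affect it is to put a
    message in the mailbox. No shared mutable state means no locks in user
    code, which is what makes the model amenable to concurrency."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.total = 0  # private state: never touched from outside the actor
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:       # poison pill: shut down cleanly
                break
            self.total += msg     # state changes only in response to messages

    def send(self, msg):
        self.mailbox.put(msg)

    def stop(self):
        self.mailbox.put(None)
        self.thread.join()


actor = SumActor()
for reading in [1, 2, 3]:
    actor.send(reading)   # senders never block on the actor's processing
actor.stop()
print(actor.total)  # 6
```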

What’s important to remember is that there are already tools and techniques that can help you build IoT applications. FBP, actors, and reactive programming all have key attributes for creating applications that leverage the strengths of IoT to overcome its challenges.


[1] https://www.usenix.org/legacy/event/mobisys05/eesr05/tech/full_papers/bakshi/bakshi.pdf

[2] http://isr.uci.edu/tech_reports/UCI-ISR-10-3.pdf

[3] http://www.reactivemanifesto.org/

[4] http://jpaulmorrison.com/fbp/

[5] http://arxiv.org/ftp/arxiv/papers/1008/1008.1459.pdf

[6] http://noflojs.org/

[7] http://akka.io/
