The Internet of Things, or "IoT" for short, undoubtedly belongs to the trending (not to say hype) topics of the present time. Who hasn't seen the studies predicting an explosion of internet-connected devices in the near future?
But is there really something to these statements? Is there a new multi-billion dollar market just waiting around the corner? Or is it just another overhyped buzzword topic of which there are currently so many?
These and other questions will be addressed in this post. First, we will establish a general definition of the term IoT. Then, after defining the concept of innovation, we will place the current IoT development in its historical context and deduce why the Internet of Things is in fact an innovation. Afterwards, we will elaborate the different aspects belonging to the field of IoT and discuss the problems they address. We will conclude with an outlook on potential future developments in this domain.
General description of IoT
In the most general sense, IoT describes the basic (bidirectional) interaction between a connected real-world interface and an arbitrary logical entity (e.g. a component of a traditional IT infrastructure) or another real-world interface.
It's a common misunderstanding that the term IoT is restricted to hardware with low computing power, such as microcontroller units (MCUs) or single-board computers like the Raspberry Pi. Although this impression is easily created by their prominence in high-profile maker projects (blog posts, social media, etc.), these devices make up only a fraction of the whole, comparable to how deep learning is a part of AI even though AI is not solely deep learning.
Instead, IoT at its core comprises the whole field layer (from the German term "Feldebene": the entirety of devices interacting with the real-world environment). Besides the aforementioned low-power devices (including traditional connected sensors and actuators), this also covers voice assistants such as Amazon Echo, household objects such as a "smart" (whatever that may mean) freezer or, in a more industrial context, even connected programmable logic controllers (PLCs) and, in part, robots.
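The bidirectional interaction described above can be sketched in a few lines of code. The following is a purely illustrative toy model, not a real protocol implementation: all class and method names (`Broker`, `FieldDevice`, `report`) are hypothetical, and the in-memory "broker" merely stands in for a backend that a real deployment would reach via a protocol such as MQTT or CoAP.

```python
class Broker:
    """Stands in for the logical entity, e.g. a cloud backend."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.readings = []  # log of all received measurements

    def publish(self, device_id, value):
        # Uplink: field device -> logical entity
        self.readings.append((device_id, value))
        # Downlink: logical entity -> field device (the "bidirectional" part)
        return "cooling_on" if value > self.threshold else "idle"


class FieldDevice:
    """A connected real-world interface, e.g. a 'smart' freezer's sensor/actuator."""

    def __init__(self, device_id, broker):
        self.device_id = device_id
        self.broker = broker
        self.state = "idle"

    def report(self, temperature_c):
        # Publish a measurement and act on the command returned by the backend.
        self.state = self.broker.publish(self.device_id, temperature_c)
        return self.state


broker = Broker(threshold=30.0)
freezer = FieldDevice("freezer-01", broker)
print(freezer.report(25.0))  # prints "idle"
print(freezer.report(34.5))  # prints "cooling_on"
```

The point of the sketch is only the shape of the exchange: the device pushes data up and receives control decisions back, regardless of whether the counterpart is a cloud service, an on-premise server or another field device.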
This description may be interpreted as a general definition of the term IoT. In order to be able to answer the questions formulated beforehand, however, this alone is not sufficient. Therefore it is necessary to dive deeper into the topic and to look at the underlying concepts.
The innovation cycle
To understand the concepts underlying IoT, it is first necessary to understand the evolutionary technical process preceding it. For this, a general definition of the concept of innovation is required in the first place.
Contrary to the widespread interpretation, innovation does not mean creating something completely new from the ground up. A historical view of humankind's great technological achievements shows that it is rather a process of recombining existing knowledge and/or resources, with slight modifications, to fit an existing problem at the right time. The latter is an important, often neglected factor, as it is paramount for predicting whether a novelty is going to disrupt existing structures or whether it is, figuratively speaking, going to sink into obscurity because the world isn't ready for it (yet). In general, several conditions have to be met (at least in part) for an innovation to develop clout, among others the maturity of the technological field, the accessibility of the technology itself and the size of the sphere of influence of the problem that the novelty can solve. The first two mainly influence the general prevalence and acceptance of the innovation, while the latter is critical for the disruptiveness of the respective development.
A good example to further illustrate this relationship is provided by the so-called "Industrial Revolutions". Looking at the developments since the mid-18th century, when the first of these shifts took place, one finds the same repeating pattern whenever a disruptive technological breakthrough was achieved.
It always starts with technological advancement in a (new) field beyond the industrial mainstream. The domain stays largely self-contained until it advances past a certain level, usually the moment when it becomes ready for usage in an industrial context (maturity) or when its complexity drops below a certain level so that its usage no longer requires specific expert knowledge (accessibility). At this point, other industries become aware of the novelty and start to examine their own field for problems it could solve or for new business models it makes possible. The new branch is then either assimilated or merged into the value chain. And if the resonance across the field as well as the impact on further development is high enough (the problem's sphere of influence), one starts to speak of a "revolution". As a result, many new fields and possibilities open up and a period of enormous technological advancement begins. This is mostly accompanied by social disruption as a reaction to the novelties and their influence on daily life.
After the era of fast development comes a technological recession, when the novelties start to settle down. This could also be called the integration phase, because this is the time when all the initial works and concepts from before are fine-tuned and integrated across the board. It is usually at this stage that new ideas and concepts, enabled and catalyzed by the advancements of the previous revolution, begin to emerge in the shadows. The cycle starts anew.
This innovation cycle can be observed at many points throughout history. It began with the mechanization that enabled the industrial use of steam engines in the mid-to-late 18th century. As a consequence, the metal industry in particular advanced enormously, which in turn facilitated the rise of the railroad and electrical industries that had led a shadow existence (i.e. in theory and R&D) up until then. This led to the second revolution in the late 19th century, which brought about the beginnings of what is known as globalization today, as well as the electrification of social and productive organization. And lastly, there is the most "recent" third industrial revolution in the mid-1900s, which marked the shift from mechanical and analogue to digital electronics and which was a (more or less) direct result of the rise of the semiconductor industry and thus of the preceding revolution. The most fitting description for this time would probably be the "era of digitization and automation".
IoT & Innovation
To summarize: so far, a general definition of the concept of innovation has been established, the differences between normal and disruptive innovations have been analyzed, and some examples in the form of the so-called "Industrial Revolutions" have been discussed.
Looking at the outcome of the last revolution, one finds that there were basically two major sectors afterwards that overshadowed the rest and determined the direction of further technological progress: the digital industry and the (consumer) electronics industry. It should be noted, though, that the success of the latter was at least partly tied to that of the former, as it provided the interface to the content produced by the software developers. So it is fair to say that it was mainly the IT industry that was the driving force in the late 20th century.
For quite some time, the software branch developed on its own. The ambition to fully exploit the potential of the internet predominated over pretty much everything else. Additionally, hardware was rather expensive in the beginning, so it didn't seem very profitable to make use of it any more than absolutely necessary. The results were effects like the dot-com bubble in the early 2000s as well as a complete redistribution of power among the big players with the rise of companies like Yahoo, Google or MySpace and later Facebook, Amazon & co.
But at the same time, the electronics industry didn't just sit still. While the software branch fought over supremacy on the internet, the exponentially increasing demand for computers, smartphones, etc. led to constant advancement in this sector. Quantities increased while size and price of the components dropped dramatically over the years. Figuratively speaking: the same (or even more) computing power that cost thousands of dollars two or three decades ago is now available for under $10 in the form of e.g. an Arduino or a Raspberry Pi Zero. At the same time, connectivity in general increased massively as well, as a direct necessity for users to be able to access the content developed by the emerging software industry.
Now, with regard to the above definition of the term innovation, one can say that humankind has arrived at a point in its development where the requirements of maturity and accessibility are definitively satisfied. Computing power has become generally available (e.g. in the cloud), hardware costs have declined massively, and a huge software ecosystem with prefabricated solutions for almost any common use case has been established in the meantime. By combining these individual pieces, it becomes possible to carry many of the technological advancements from the software industry that humanity has become familiar with over the last ~30 years beyond private computers, smartphones, etc. to devices, machines, buildings (in short: everything) and to connect them with each other or with the internet. This alone already qualifies this newly emerging development, also known as IoT, as an innovation with a high probability of prevailing.
Even so, what remains to be discussed is the problem space solved by this development, or rather the need covered by it, as this is significant for the clout and thus the disruptiveness of an innovation. This will be the topic of the second part.