Technology

Why Engineers Love the Smart City Works Actuator

So now it’s real! A fantastic ribbon-cutting and Meet the Cohort event last Friday the 14th for the new Smart City Works Actuator at CIT, next door to our enormously successful and now four-year-old cybersecurity accelerator, MACH37 (which also graciously hosted the event). The Governor came to get the 100 or so guests pumped up and glad to be Virginians. Thomas Smith, the Executive Director of the American Society of Civil Engineers, spoke about our failing infrastructure and how the Smart City Actuator could play a role in helping renew it. There actually was a ribbon, and the Governor was decisive in cutting it (look at the lever arms on those scissors!). And, in addition to civil engineers, we had electrical, mechanical, and transportation engineers, an aerospace engineer, computer scientists and data scientists, a materials scientist or two (graphene, of course), and probably more. So why do all sorts of engineers love the Smart City Works Actuator? We can turn to the laws of physics for answers. Two laws that every engineer learns apply here:

F=ma, where a of course is acceleration,

and the formula for Kinetic energy (energy in action)

Kε = ½mv²

Now for our purposes we will let m represent the size of the mentor network, and v represent the volume of innovative companies the accelerator capacity can handle. By starting the Smart City Works Actuator, a has now become 2a, m has become 2m, and v is of course 2v. Substituting in our equations, and letting F represent the amount of fun we are having, any engineer can tell you the results:

(2m)(2a) = 4ma = 4F …four times the fun!

and

½(2m)(2v)² = ½(2m)(4v²) = 8[½mv²] = 8Kε …eight times the energy!!

Yes, it’s true. Engineers love the Smart City Works Actuator because, together with our MACH37 Accelerator, they can come and have four times the fun and experience eight times the energy, all while helping build a better world. Q.E.D.

Of course the way we help Actuate a better world is by helping accelerate our innovative entrepreneurs, and the Smart City Works Actuator has some great ones!

IHT.   You no longer need to be a scientist to know whether your water is safe. Using a patented new technology, Integrated Health Technologies’ Sensor Bottle™ detects and relays water quality information to your phone to provide you with real-time peace of mind that the water you consume is safe to drink.  For cities, these bottles provide a crowd-sourced platform for real-time water quality detection and monitoring of municipal water systems.

UNOMICEDGE.  UNOMICEDGE is a Software Defined Network solution for securely connecting the Cloud to devices at the Network Edge.  It includes a network Hypervisor that not only enforces network security policies, but develops critical business and operational insights from user and device interactions. Smart cities rely on smart IoT devices at the Network Edge.  UnomicEdge not only reduces the cyber risk of IoT, but can provide valuable intelligence to make businesses and cities run smarter.

Infraccess.  Infraccess is powering up infrastructure investment by providing easier access to trusted data so you can more efficiently discover investment opportunities, make quicker, better informed investments, and reduce overall investment risk. The Infraccess web-based workflow platform sources and transforms unstructured information into smart data and proprietary performance indicators to help unlock billions in investment opportunities in infrastructure.

Capital Construction Solutions.  Capital Construction Solutions creates mobile-based risk management platforms for improving enterprise-wide accountability and transparency.  With Capital Construction Solutions deployed in the field, companies can immediately turn day-to-day operations into opportunities to reduce corporate liability, mitigate risk, and significantly increase profits.

PLANITIMPACT.   Design decisions can have significant and long-lasting impact on business and environmental costs.  PlanITimpact has created a smart modeling platform to help building professionals better understand and improve performance, including energy, water use, stormwater and transportation, so owners, investors, and communities can better visualize project impacts and returns on investment.

GREATER PLACES.  Cities worldwide are investing in the next generation of buildings, infrastructure, transportation, and technology. But where can you turn to readily find the best leading-edge solutions in this space?   GreaterPlaces creates a single web-based and mobile platform for bringing together the best ideas, inspirations, and practices for designing and governing cities: a marketplace and tools to connect people seeking ideas, products and services to transform cities worldwide.

Come join them and see what you’re missing!

All photos courtesy of Dan Woolley


How Do We Know the Actuator is Working? Part 3 – Corporate/University Programs

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

Parts 1 and 2 of this series looked at methods for measuring the success of commercial and government programs for accelerating innovation. The other major sources of innovation in the U.S. economy are Corporate R&D efforts and University programs. How effective are these sources for driving innovation? It is hard to generalize, given the very large number of both Corporate efforts and University programs and the paucity of data, although the anecdotal data is mixed to negative.

It does seem as though our model of funding leverage and transition probabilities could be adapted to measure success here as well. For this analysis, however, alternate data sources provide some quantitative clues that support the anecdotal data. On the Corporate side there are certainly large-company innovation success stories. Apple, for example, had success with its early mass-market personal computer and more recently with its mobile devices (iPhone, iPad). One of the more amazing companies in this regard has to be General Electric which, over the last 50 years (based on where its profits originate), has gone from a heavy equipment manufacturer to a financial services company to the current big data company that also happens to be one of the world’s leading manufacturers…and always as a market leader.

But the anecdotal and other data tell a different story. Many accelerator organizations around the country conduct what are essentially consulting engagements with large Corporations to help them engage the innovation ecosystem. Data show that 86% of the original 1955 Fortune 500 companies no longer exist, and that 50% of the Fortune 500 in the year 2000 have also failed. The average life expectancy of all companies is only 12.5 years, and the failure rate seems to be increasing. According to Forbes, the reason: “Why U.S. Firms Are Dying: Failure to Innovate.”

One solution might be for Corporations to invest more in R&D. Alas. <re/code> reports on a clever study from Bernstein Research analyst Toni Sacconaghi that looked at “historical R&D spending figures as a percentage of sales for publicly traded tech companies with market caps of $2 billion”, then tracked their stock performance 5 years later, when presumably the research might have paid off. The chart summarizes his data and concludes that there is “no meaningful relationship between R&D spending and stock performance”. I had a hard time deciphering the chart, so I decided to put it in a more graphical form and insert a few trend lines. The results were actually worse than Sacconaghi was willing to conclude. Not only is the regression line negative…the more you spend, the worse your results…but the standard deviation also decreases with spending, meaning that the likelihood of a poor result is much higher the more you spend. Looking at this chart as a non-financial person, it is not clear to me why anybody would invest in companies spending more than 20% of sales on R&D. As a side note, something really interesting seems to be happening around a magical number of 7-8% R&D spending, but I haven’t a clue what that might be…ideas welcome!
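
As a rough illustration of the kind of re-analysis described above, here is a minimal sketch of fitting a trend line and comparing the spread of outcomes. The numbers are made-up placeholders for illustration, not Sacconaghi’s actual data set.

```python
import numpy as np

# Hypothetical (R&D as % of sales, 5-year stock return %) pairs --
# illustrative placeholders only, not the Bernstein/Sacconaghi data.
rd_pct  = np.array([3, 5, 7, 8, 10, 14, 18, 22, 27, 33])
returns = np.array([150, -30, 90, 120, -10, 40, 25, 10, -5, -15])

# Least-squares trend line: a negative slope means that, in this sample,
# higher R&D spending goes with worse subsequent stock performance.
slope, intercept = np.polyfit(rd_pct, returns, 1)
print(f"trend: return ~ {slope:.1f} * (R&D %) + {intercept:.1f}")

# Spread of outcomes below vs. above a spending threshold, mirroring the
# observation that the variation in results shrinks as spending rises.
low, high = returns[rd_pct <= 10], returns[rd_pct > 10]
print("std dev of returns, R&D <= 10% of sales:", round(float(low.std()), 1))
print("std dev of returns, R&D  > 10% of sales:", round(float(high.std()), 1))
```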

How about Universities? Many of them have Tech Transfer offices for their research-derived intellectual property (IP), and again there are certainly success stories at MIT/Harvard, Carnegie Mellon, Stanford and others. Anecdotally, however, these offices, many based on a traditional licensing model, have overall not been terribly successful. Two reasons are often given: first, IP transfers much more effectively via people than via paper, so licensing models without researcher support are not effective; and second, some Tech Transfer offices like to value the IP on the basis of research dollars spent, not on market value. As one of my colleagues put it, “When I have these discussions the Tech Transfer Office will often tell me the value of the IP is $10M or something; I tell them it’s worth zero unless somebody takes it to market, and we negotiate from there.” Fortunately some of the more forward-looking Universities are starting to migrate towards a shared risk/reward model where the value of IP is set by marketplace outcomes.

Is there an explanation for this phenomenon? One possible answer lies in where the funding for research comes from, and where it goes. Again an obscure chart, this one from NSF. After staring at this one for a while, what I really wanted to know was net funding flows. For example, in Basic Research the Federal Government funds about 60% and only conducts about 10%…where does the other 50% go? Mostly to Universities, which fund 10% and use more than 50% of the Basic Research dollars. OK, easy enough to turn into a graph, where the nodes are the sources, scaled to size by the percentage of funds sourced, and the graph edges are the net flows, also scaled to size. By applying some awesome PowerPoint skills and a garish color scheme, the Basic Research picture looks like this. Half of all research dollars go from the Federal Government to University-based research, with a small additional amount funded by Industry, totaling about 2/3 of all research performed.
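
To make the net-flow arithmetic concrete, here is a minimal sketch; the percentages are rough readings of the NSF chart described above (assumptions, not exact figures), and the net positions are what become the edges of the graph.

```python
# Approximate shares of U.S. Basic Research dollars, as percent of total.
# Rough readings from the NSF chart discussed above -- assumptions, not exact figures.
funded_by    = {"Federal Government": 60, "Industry": 20, "Universities": 10, "Other": 10}
performed_by = {"Federal Government": 10, "Industry": 20, "Universities": 55, "Other": 15}

# Net position of each sector: positive = performs more research than it funds
# (a net user of research dollars), negative = a net source of dollars.
for sector in funded_by:
    net = performed_by[sector] - funded_by[sector]
    role = "net user" if net > 0 else "net source" if net < 0 else "balanced"
    print(f"{sector:20s} {net:+4d} points  ({role})")
```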

Now applying the same analysis to the Development portions of the NSF chart yields the following. Here, almost ninety percent of all Product Development activity is funded and performed by Industry, with some support from the Federal Government, while Universities are almost invisible. No wonder there is a bit of a disconnect; Universities apparently are not at all focused on commercializing their research, if the funding numbers are to be believed. One last chart provides the summary. Looking at the per capita number of startups based on University-developed IP, the numbers have been dropping for a while. More to the point, the numbers are low. For Virginia, for example, these numbers equate to about 20 startups per year. Our cybersecurity accelerator, MACH37, evaluates more than 100 companies per year to select a dozen participants just in the area of cybersecurity. Numbers are similar for most venture investors, with only single-digit percentages of the number of deals reviewed actually resulting in investment. Thus for Virginia this may equate to one or two investable companies per year based on University-generated IP. To be fair, this probably underestimates the amount of entrepreneurial activity generated by University programs and proximity, but it is probably reasonably accurate in terms of the Institutional success, based at least on the anecdotal evidence.

It is clear now, having looked at innovation and technology transfer across commercial accelerators, the Federal Government, Corporations and Universities that successful innovation is one of those “wicked problems.” While there are successes across each of these domains, there are no magic bullets, no guaranteed approaches for innovation. So, how do we know our Smart City Actuator is working? And, are there ways to make this entire national research enterprise more efficient? We will explore those questions in Part 4 of this series.

Next (Thursday 3/23): How do we know the Actuator is Working? Part 4 – Synthesis and Policy

How Do We Know the Actuator is Working? Part 2 – Government Programs

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

In the last post we looked at commercial accelerator/investment programs and presented a methodology and results allowing more or less direct comparison of outcomes for these programs. The original study, however, was funded by the Federal Government to look at how these commercial approaches compared to government innovation programs.


There are a number of well-known innovation activities across the Federal Government, including the Small Business Administration’s SBIR program (Small Business Innovation Research…check out CIT’s own Robert Brooke as an SBIR 2016 Tibbetts Award Winner!), the Defense Advanced Research Projects Agency (DARPA) and its programs, In-Q-Tel which is the intelligence community program that uses commercial-style practices to encourage government-relevant innovation, and others in the various military services, national laboratories, contractor activities and elsewhere.

Access to data on success rates and outcomes is as fragmented in the government domain as it is in the commercial world, in some cases compounded by classified work or other policies specifically designed to limit information access. But as in the commercial world, the best-known and most successful Government programs are not shy about sharing their results. Special commendation goes to the SBIR program and In-Q-Tel, both of whose data sets have proven invaluable.

The SBIR program identifies three phases of development, with Phase I generally less than 6 months and $150,000, Phase II generally less than $1,000,000 and 2 years, and Phase III externally (non-SBIR) funded, providing a reasonable basis of comparison in our model. The SBIR program publishes good data on the percentage of proposals and the amounts awarded at each Phase, allowing for a robust analysis, although the Government Accountability Office (GAO) did find that the data are insufficient to determine DoD SBIR transition success from Phase II to Phase III. One additional twist is that the Navy instituted a program from 2000 to 2015 called the Transition Assistance Program (TAP) to provide mentoring support to these early stage researchers, and that data is also available in at least one study looking at the period from 2005 to 2008.

DARPA was a bit of a surprise. When the GAO tried to assess the transition performance of DARPA projects, they concluded that “inconsistencies in how the agency defines and assesses its transition outcomes preclude GAO from reliably reporting on transition performance across DARPA’s portfolio of 150 programs that were successfully completed between fiscal years 2010 and 2014.” Another study puts DARPA’s success rate at about 3-5 products to market per year over 40 years, which the authors characterize as “quite impressive.” Measuring transition success is clearly not a priority.

In-Q-Tel data were a bit harder to come by, but here we were able to use two sources: their published number on downstream funding leverage, and a calculated number based on external data about the size of the In-Q-Tel portfolio and additional published funding events. Thus we were able to calculate a performance number and compare it to the published number, again as a check on the validity of the model. All of these results are shown in the Figure. The In-Q-Tel (IQT) data show reasonable correlation between published and calculated numbers depending where IQT falls on the investment spectrum, and also show that the best Government programs perform in line with the best of the commercial programs.
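
For readers who want to see the shape of that calculation, here is a minimal sketch of the two numbers in the model, downstream funding leverage and transition rate. The portfolio figures are placeholders, not In-Q-Tel’s actual data.

```python
# Hypothetical portfolio figures -- placeholders, not In-Q-Tel's actual data.
program_investment_m = 250    # $M invested by the program across its portfolio
downstream_funding_m = 2900   # $M of follow-on private funding raised by portfolio companies
portfolio_size       = 100    # number of companies funded by the program
followed_on          = 60     # companies that went on to raise external funding

# Funding leverage: downstream dollars attracted per program dollar invested.
leverage = downstream_funding_m / program_investment_m

# Transition rate: fraction of funded companies that reached externally
# funded work (the rough analogue of SBIR Phase III).
transition_rate = followed_on / portfolio_size

print(f"funding leverage ~ {leverage:.1f}x")
print(f"transition rate  ~ {transition_rate:.0%}")
```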

What about the rest? A couple things seem clear. First, there is much more emphasis on activity than on outcomes in the Government R&D space…how many programs are funded versus how many of those funded programs succeed in eventually deploying to users. Given the rapid rate of change in technology and the fact that our national strategic competitors are looking very hard for strategic advantage, it is certainly in the U.S. national interest to have a robust scientific community actively researching a large number of areas of interest. In this domain, activity rather than outcomes may in fact be the right metric. Some of the focus on activity is also driven by the Government budget cycle process, and certainly if outcomes are not reliably known for 4-7 years as in the commercial world, this is beyond the next election cycle for most elected officials.

But in that subset where transition to users is important, perhaps even a stated goal, the Government programs seem to struggle. The fact that GAO could not determine transition success rates for either SBIR Phase III or DARPA is one indicator. Plenty of literature speaks of the “Valley of Death” in the Government world, where inventions go to die before ever being deployed.

Among other issues, there are structural reasons for this. The “market” for Government transition is generally Programs of Record, those big, often billion-dollar programs. Those programs run on an entirely different set of principles than the Government R&D world, a set of principles where risk reduction rules the day and innovation may not even be welcome. So most Government R&D programs and national laboratories now have “technology transition” programs or offices, looking to commercialize all those great inventions that have been developed along the way, in some cases with associated patents.

The standard model for these efforts has been to look at the outcomes of the early stage R&D process and license the intellectual property, or try and find an entrepreneur who will take it on, or encourage the inventor to become an entrepreneur. Two problems plague this approach: intellectual property transfers much more effectively via people than via paper; and, inventions created and prototyped without the market discipline of external investors and users determining value are almost always poorly optimized for the eventual market they hope to serve.

The programs that have done best at this are those that adopt the most “commercial-like” practices: listen to your customers and end users, get early feedback, understand the needs, understand the price points, worry about transition to market. When GAO looked at a set of DARPA case studies, they summarized the factors for success in much the same way.

The good news is that the Smart Cities Actuator instills the commercial version of exactly this set of principles. While the Federal government can focus significant investment on technology development, it seems that the best Government programs are about as good as the best commercial programs. The difference is not the amount of money but the set of practices that make transition effective.

Next (Monday 3/20): How do we know the Actuator is Working? Part 3 – Corporate/University Programs

What Is a Smart City?

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

As CIT and Smart City Works developed our new Smart City Works Actuator, this question kept coming up from just about everyone. Some people just asked. Others knew a few reference points: I know so and so city is doing smart parking meters or smart street lights or smart trash collection…is that what you mean? Still others referenced the technology components: do you mean broadband, or Internet of Things (IoT), or cybersecurity, or autonomous vehicles? A few asked the meta questions: will this improve resilience, will this enable the surveillance state, will this improve people’s lives?

The standard web definitions were not much help. Wikipedia has: “A smart city is an urban development vision to integrate multiple information and communication technology (ICT) and Internet of Things (IoT) solutions in a secure fashion to manage a city’s assets – the city’s assets include, but are not limited to, local departments’ information systems, schools, libraries, transportation systems, hospitals, power plants, water supply networks, waste management, law enforcement, and other community services. The goal of building a smart city is to improve quality of life by using urban informatics and technology to improve the efficiency of services and meet residents’ needs.” This is helpful, and brings Quality of Life to the table, but does not provide much guidance on how to build one, or how these many and varied pieces fit together.

Sarwant Singh, based on a Frost & Sullivan study, provides a fairly typical definition: “We identified eight key aspects that define a Smart City: smart governance, smart energy, smart building, smart mobility, smart infrastructure, smart technology, smart healthcare and smart citizen.” Lots of smarts and interdependencies, but not much structure.

So we developed our own definition, one based loosely on the old communications stack model, where each layer of the stack depends on services provided by the layer below it. Note in this version we have explicitly included the 22 City Link™ platform at the Link layer, since an initial implementation of this vision will be with our partners at Gramercy District where the 22 City Link™ platform is being piloted; other communities may have different Link layer implementations.

Several things stand out in this working definition:

  1. It explicitly ties technologies up the stack to the people-centric use cases around improving quality of life.
  2. It provides context for things such as data collection or cybersecurity or autonomous vehicles…we don’t want to do them just because we can, but because they achieve some goal in the context of a set of infrastructures. This context also helps open up questions along the lines of: what is the proper balance between the privacy of people transiting the urban environment, and data collection for use by retailers…who owns the data, what permissions are needed, can it be re-sold, how long can it be retained, etc.
  3. For the Actuator, it will help us help innovators understand where they fit in a larger picture, which will aid them in defining the boundaries of what needs to be included in their specific product offerings. Furthermore, this provides fodder for discussions of how to scale a product. It is fantastic to be able to demonstrate a product in the friendly, custom confines of Gramercy District, but proving that a product can scale, and is thus investable, requires that it function in a wide range of built environments and infrastructure, old and new.
  4. It can be useful in early “diagnostic” discussions…for developers the discussion includes topics like “what do the buildings look like”. For communities whose vision is something like “become a smart city” but are underserved when it comes to connectivity, it provides a starting point for a longer term strategic growth plan that begins with “first, get connected”. For larger platform companies it may help make the externalities explicit for ongoing product evolution and understanding the sweet spots and limitations for existing products.

Our collective understanding of Smart Cities is evolving rapidly as the innovation process and ecosystems begin to explode. Hopefully this working definition will provide a more stable framework for understanding where and how these innovations can ultimately serve to improve our quality of life.

Next (Monday 2/27): What Is an Actuator?

A Tale of Four Cities (with apologies to Dickens)

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair…” Charles Dickens, A Tale of Two Cities

Since the beginning of 2016, it seems like the worst of times. We have seen a correction in the stock market as the Chinese economic bubble has popped, taking the global oil markets with it, and bringing back the all-too-recent memories of the Internet bubble of 2000 and the financial bubble of 2008 (watch out, 2024!). The misery has spread to the Tech sector. The unicorn, unofficial mascot of Silicon Valley, which had gone from being a rare beast in 2014 to a veritable population explosion in 2015, is once again on the verge of extinction.

Yet the economic talking heads tell us this is normal, that the U.S. economy is doing well and is reasonably insulated from both the Chinese economy and the negative oil shock. That corrections are a necessary part of the market, to restore balance after a period of irrational exuberance. So, what the heck is going on with Tech?

In 2015 I was Principal Investigator for a DHS-funded program called EMERGE, working to leverage commercial business accelerators to help commercially-focused innovative companies bring some of their technology to address needs of the DHS community. As part of this program we were fortunate to get an inside view of four different business accelerator programs in four different cities, described below.

Here is what I learned. First, tech innovation does not occur in isolation; it is the result of effective regional innovation ecosystems that include customers, entrepreneurs, funding sources, a high concentration of expertise and ideas, and enough of a support infrastructure to help the entrepreneurs through the early pitfalls. Each of the four accelerator programs above has done an outstanding job of helping build and then leverage their local ecosystem as an integral part of what makes each region grow.

Second, Silicon Valley is not identical to the Tech sector. Although news coverage often glosses over this fact, innovation occurs in many places across the country. I will argue below that while Silicon Valley is indeed unique in many ways, generalizations based on that unique set of circumstances can often be wrong. In the current situation, the doom and gloom based on over-priced investments there is less relevant in other parts of the country.

And so, the four cities.

Dallas – Texas has several innovation centers including both Dallas and Austin. There is a diverse industry base, with concentrations in energy, health care/life sciences and tech, significant university presence, and a good concentration of wealth. Tech Wildcatters has successfully provided leadership to the region’s startup community with special programs in both health care and tech, and most recently going to a year-round program from the more typical discrete sessions. Dallas is a vibrant startup location, although it is unclear what effect the collapse of oil prices may have on access to capital in the region.

Chicago – political issues aside, Chicago has the benefit of a high concentration of Fortune 500 Corporate Headquarters, a robust investment sector and strong University presence. TechNexus has done a masterful job first in priming the innovation ecosystem development 7 or 8 years ago, and now tapping into the innovation needs of Corporate strategic partners who are looking to early stage companies as a source of new products and ideas. If the city can recover from its social strife it is certainly positioned to continue as a significant center of tech innovation.

San Francisco – San Francisco/Silicon Valley is the undisputed investment capital of the world for tech. According to Pitchbook, in the third quarter of 2015 more than 27% of all the venture capital invested globally came out of Silicon Valley. China has risen rapidly as both a source and target of VC investment, although the collapse of the economy in China seems certain to be a major setback in this area, as the graph seems to indicate starting in Q4 of 2015. New York ranks third on this list, providing just north of 8% of the globally invested capital.

Yet with all that money floating around it appears that some Silicon Valley investors may have had more dollars than sense. If you look at the number of deals and the dollar amounts as compiled by Pitchbook, the dollars invested continued to rise in 2015 even while the number of deals plummeted, leading to a rapid rise in median valuations.

By comparison, valuations in New York during this same time were only 10% of the San Francisco valuations, an enormous disparity. There are some possible alternative explanations for this disparity (bigger opportunities, a move towards later stage investments, etc.), but both the anecdotal evidence at the time (“too much money chasing too few deals” was a sentiment we heard more than once) and the subsequent down rounds of investment, even for some of the high flyers, indicate that over-valuation on the part of investors was at least one primary cause of the disparity.

A second point. Why on earth would you want to locate and operate a company in the outrageously expensive environs of San Francisco where none of your employees can afford to live? Or Palo Alto, where Palantir is driving out start-ups by snapping up office space at high rents. Well, there are certainly some reasons: if you want to hang with the cool kids, California is the place you ought to be. If you need to raise a billion dollars or so, where else would you go? And certainly if you want frothy valuations during the good times, the target destination is clear.

A recent Harvard Business School study (http://www.hbs.edu/faculty/Publication%20Files/09-143.pdf) hinted at one possible evolution of this trend. According to the study:

“Venture capital firms based in locales that are venture capital centers outperform… [as a result of] outsized performance outside of the …firms’ office locations…”

That is, if you are a VC you want to be in one of the centers of VC activity because there is a strong ecosystem of investors…but, the big returns are to be found by investing in other places. Certainly Silicon Valley is not going away as the primary center of activity. Increasingly however, those investors seem to be syndicating with other groups in places such as Dallas, Chicago or…

Washington DC – The region centered around Washington DC is generally considered to include Maryland, Virginia (or at least Northern Virginia), and DC itself. The Federal Government is a large presence, along with some of the specialty areas such as cybersecurity and data analytics it has helped develop. Health care/life sciences is also a major player in the area, and there are multiple world-class universities that support the ecosystem. The region generally ranks in the Top 10 innovation areas of the country, and the area’s capital investments are growing, actually increasing in the 4th quarter of 2015 even while investments were declining nationally. One reason for this increase is the growth in cybersecurity, with the potential for more than a billion dollars in cybersecurity investments in the region in 2016. The two biggest areas were health care/bio and software (including cyber), and there is an organized, active ecosystem working to promote the growth of these and other industry sectors.

Conclusions – Clearly the stock market is in correction territory, driven initially by economic issues in China and the energy sector. While the tech sector also appears under pressure, the fundamentals here are very different. In the short term, what appears to be a broad retrenchment in the sector is actually mostly a correction of inflated valuations on the West Coast that are not indicative of the sector as a whole. As Rick Gordon, Managing Partner of the MACH37 Cybersecurity Accelerator puts it: “while Silicon Valley has been out on the great unicorn hunt, we have been building an army of cockroaches…small, fast, nimble, designed to survive a nuclear winter, and available at a reasonable price.”

The age of easy money from building the next mobile app may be behind us, but the advent of autonomous vehicles, personalized medicine, data-driven everything and more will ensure that the tech sector will continue to drive the next wave of innovation and economic growth for decades to come. But it is increasingly likely that the actual innovations will be found in places like Dallas, Chicago and the Washington region even if the investment capital still flows from New York and Silicon Valley.

CSAM Back to Basics: Internet of Things (IoT)

As with the cloud, where virtualization was the key to unlocking the potential, the Internet of Things was also unlocked by a common denominator technology: the drastically falling price of sensors. Sure, the “Things” of IoT are networked and have computing power at the nodes, but so too does the plain old internet (PoI). What makes IoT different from what came before is the ubiquity of sensors of all types.

One of the first embodiments of IoT in my book was the iPhone 4s (although some of its features had been previewed in the iPhone 3GS). Of course many claims could be made for the first “thing”, but the addition of Siri voice recognition and response, and a more capable chipset make the 4s as good a candidate as any. With that addition the 4s enjoyed a list of sensors including: 3-axis accelerometer, compass, up/down orientation sensor, voice recognition, touch screen, 4 RF sensors (receivers – cellular, bluetooth, GPS, and of all things, FM radio), a light sensor, 2 cameras, the plug controller for both electricity and data…and no doubt more. We tend to think of phones in other categories, but as a sensor platform they are unparalleled.


Most estimates are now calling for upwards of 20 billion IoT devices by 2020 (which BTW necessitates IPv6 to provide addresses for all those devices…see Oct 9 post for more on IPv6). As the chart indicates, these networked sensors will be everywhere, and in every sector. Since the cybersecurity issues with IoT things are essentially the same as general cybersecurity concerns, one might have hoped that the introduction of IoT would be more security-conscious. Alas, in the rush to get devices to market, we are now seeing the same set of issues with IoT that we have seen for so long in enterprise networks. The chart at right represents one of a number of studies on devices on the market in 2014/2015, and essentially no devices tested were secure from all of the most common vulnerabilities. Worse yet, many components of our critical infrastructure, including airlines, automobiles, and the power grid, are now being updated both to bring components onto the network as well as to introduce IoT-type sensors for better control…and increased vulnerability.
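
The IPv6 aside is easy to sanity-check with back-of-the-envelope arithmetic:

```python
# Why ~20 billion devices forces the move to IPv6: the older IPv4 address
# space cannot give every "thing" its own address.
ipv4_addresses   = 2 ** 32            # about 4.3 billion possible IPv4 addresses
ipv6_addresses   = 2 ** 128           # about 3.4e38 possible IPv6 addresses
iot_devices_2020 = 20_000_000_000     # the ~20 billion device estimate above

print(f"IPv4 address space: {ipv4_addresses:,}")
print(f"Projected IoT devices: {iot_devices_2020:,} "
      f"({iot_devices_2020 / ipv4_addresses:.1f}x the entire IPv4 space)")
print(f"IPv6 address space: {ipv6_addresses:.2e}")
```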

So what would it take to secure the Internet of Things? Basically the same set of things required for the internet at large…don’t build security flaws into the software, make sure the devices are continuously updated, keep these devices separated from your other operational networks, control access and data flow. Of course IoT is worse than the PoI, not better, due to the many new vendors and users who are less familiar with the risks than the traditional IT security professionals.

The current state of understanding of IoT security is captured in the new ISACA 2015 IT Risk/Reward Barometer, which surveys 7,000 ISACA members and 5,400 consumers from various countries to understand their concerns. Almost three-quarters (73%) of the security professionals believe their business is at a medium to high risk of being hacked via an IoT device, and the same number believe IoT security standards are not adequate. Consumers want more, but are again woefully unprepared for managing all their devices (most underestimate the current number of devices in their household by about 50%) or the data losses likely to result.

The regulatory side is also in flux. The Federal Trade Commission is taking action, seemingly looking towards more broad-based privacy protections rather than anything specific to IoT. Key industry leaders are leery of intrusive legislation or regulation, and the new IoT Congressional Caucus seems to be primarily focused on bringing Congressional colleagues up to speed on how to spell IoT, and the implications. Unfortunately, there are clearly some misguided regulatory efforts. A draft bill in front of the House Energy and Commerce Committee would create fines up to $100,000 for unauthorized access to motor vehicle systems, even if you own the car. This is the latest salvo from the automobile industry, unbeknownst to most consumers, looking to establish that you don’t really own the software that runs your car, or even have rights to inspect or modify it. While this is especially egregious in light of the recent Volkswagen scandal involving software that defeated regulatory compliance, the automobile industry is certainly well-funded and persistent.

IoT will certainly become a pervasive fact of life over the next 5 years. It poses yet another daunting round of technical, legal, privacy and regulatory hurdles that will take years to sort out. As a consumer, your options will sound familiar: be aware of what devices you have and allow into your environment; for home use, control what you allow on your network; be careful what information you allow to be shared, and with whom. But there are more options here as well, given the state-of-play of IoT. You can choose to help influence the debate beginning in Congress on regulating this industry, or on the FTC rule-making regarding your privacy. You could choose to become involved in advocacy groups like EFF or others who are active in these discussions. We do have a voice in shaping our technology future.

 

CSAM: Security in the Cloud

The cloud is not some magical place apart, it is simply another piece of the information infrastructure, and as such has both opportunities and challenges for users and security professionals alike. The cloud offers seductive benefits of speed to provision, reliability, elasticity, lower price, mobile access and more. A new study by Dell finds that companies that have invested in cloud, big data, mobility and security are seeing about 50% faster revenue growth than those that did not. But these benefits come at the price of some loss of control over data and security. Additionally cloud opens some new vulnerabilities, for example attacks against the “hypervisors” that manage the virtual machines.

For IT departments, the cloud has turned the old security paradigm on its head. It used to be that security specialists could draw a clean boundary between “our network” and “out there”, and try to keep bad things from crossing the boundary.

With the cloud, and the rise of mobile devices, the boundary is no longer relevant in many cases. In theory, anybody can access Corporate data in the cloud from anywhere, and with the use of BYOD devices, the boundary is often inside the devices themselves…one reason device security and access control is becoming more prevalent. Current thinking is moving towards “hybrid cloud” implementations, with an attempt to keep more valuable data closer to home and thus re-establish at least some semblance of a virtual boundary.

But, users are not to be deterred. Security vendor CipherCloud analyzed a year’s worth of cloud usage data from its enterprise customers and discovered that on average, North American companies used about 1,245 cloud applications. Of that number, an astounding 86 percent were unsanctioned applications that IT groups had little idea were being accessed from inside the enterprise network. The CipherCloud article cites “John Pescatore, director of emerging security threats at the SANS Institute, [as saying] much of the risk can be mitigated if IT is responsive to business needs. Employees and business groups often sign up for cloud services they need on their own because it is faster than waiting for IT to provision it for them, he says.”

Still, the cloud can be safe with appropriate controls. One of the biggest causes of data loss in the cloud, for example, is the loss or theft of devices (one-quarter) or other types of employee negligence (one-third). These numbers indicate that simple education and loss remediation (device wiping, for example) can substantially reduce the risk.

So here is the bottom line. Like many issues in cyber security these days, it is all about risk management. If you are a Governmental Agency and have high value personal data, or a defense contractor with classified information, or a critical infrastructure component with sensitive operational data, then your tolerance for any data loss is very low. In these cases you need strong security controls no matter where your data lives. If you are a small business, the big cloud providers are likely to be able to provide much better protection for your data than you can provide, at lower cost and better reliability, and can deliver surge or rapid growth capacity that you cannot match.

For individuals, the best advice is to know where your data lives and how to control that. Much of the data, photographs for example, on our mobile devices are duplicated to the cloud by default (hopefully the proprietary cloud of the provider), but you can generally choose to disable this function. Many apps split data storage between your local machine and the cloud, running some bits locally and some remotely. Some apps are more greedy and “verbose” than others, scooping up any data they can access on your device and sending it elsewhere; there is a growing body of literature on these app characteristics, and you can choose not to use the worst offenders.

One last thought for individual users is the cost associated with cloud storage. For businesses the cost benefits are generally clear, but for consumers cloud is generally much more expensive. Commodity portable disk drives storing a terabyte of data are now barely $100 at the big box stores.

If you buy one, put your data on it and plug it in when you need access, it can cover your data needs for a very long time (ok, buy two for reliability, or even a third one to stick in your safety deposit box for offline storage). Compare that to the cost of a data plan that would allow you to access that same data. Verizon is quoting $100 per month for 18 GB of data access, about 50X more expensive, not counting the cost of the storage itself. Not quite apples to apples, but certainly consumers need to consider the true cost of their cloud storage and access.
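
The rough arithmetic behind that “about 50X” comparison, using the list prices quoted above (which will obviously drift over time), looks like this:

```python
# Back-of-the-envelope comparison from the paragraph above.
drive_cost_usd    = 100    # one-time cost of a ~1 TB portable drive
drive_capacity_gb = 1000   # roughly 1 TB

plan_cost_usd_per_month = 100   # quoted data plan price
plan_capacity_gb        = 18    # data included per month

# Months of the data plan needed just to move one drive's worth of data,
# ignoring the cost of the remote storage itself.
months_needed = drive_capacity_gb / plan_capacity_gb
plan_total    = months_needed * plan_cost_usd_per_month

print(f"~{months_needed:.0f} months of the plan, ~${plan_total:,.0f} in data charges")
print(f"vs. a ${drive_cost_usd} drive: roughly {plan_total / drive_cost_usd:.0f}x more expensive")
```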

CSAM Industry Vertical: Finance

When most people think of cybersecurity, they think of IT departments protecting corporate networks, or individuals at home on their personal computers. But cybersecurity is differentiating rapidly as more people realize its actual goal is to improve the reliability of some other business process or product, and not an end in itself. Since these business processes vary widely from one industry to another it makes sense to talk about the unique issues and approaches faced by individual market verticals. Today: the financial industry.

The financial industry, including banks but also financial markets (stock exchanges), credit card companies and others, was one of the first commercial industry verticals to realize this need for cybersecurity. In part, of course, this realization came about because the financial industry is where the money is, and so they are subject to direct, frequent attacks for purposes of fraud or theft. Additionally however, as we have learned during the occasional market glitch or through the application of international sanctions, the financial industry is an essential component of commerce both domestically and internationally. As such it is also the target of attacks from those seeking to do economic damage or bring down portions of our country’s critical infrastructure, including very high volumes of Distributed Denial of Service (DDoS) attacks whose goal is simply to disrupt.

As a result of these pressures, the industry has been a leader and early adopter of security technology. Most banks now routinely use two-factor authentication, for example, for online transactions. The Financial Services – Information Sharing and Analysis Center (FS-ISAC) was one of the first and still one of the strongest information sharing groups, as the banks realized they were not competing on the basis of security, but rather treated it as a common good. The industry has its own established mechanisms (BITS) for vetting technologies of interest, and its own FinTech technology accelerators for transitioning new technologies. Many banks have thousands of employees and millions of dollars dedicated entirely to cybersecurity as one aspect of their overall security posture.

But the picture is not entirely rosy, and the industry seems to be entering a period of rapid change. In spite of all the security measures, financial institutions routinely write down several percent of their revenues to loss or fraud, increasingly online.  Banks are increasingly pushing back on their liability for these losses. Financial institutions DO compete on ease-of-use for their customers, so the cumbersome two-factor authentication processes remain open to significant improvement.

On the technology side, mobile payments from Apple, Google and others are bringing structural change to the industry as well as technology changes such as “tokenization” that stop the transmission of actual credit card or other account data by replacing it with transaction tokens. For the insurance side of the industry cyber-insurance is growing rapidly in spite of the fact that the equivalent of actuarial tables for risk or even best practices for reducing risk remain a work in progress. And e-currencies such as Bitcoin and the underlying technologies they use, initially fought by the industry, are now beginning to gain traction as the transactional, regulatory and technology components of the industry all try and understand the implications.
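
A minimal sketch of the tokenization idea mentioned above; this is illustrative only, since real payment tokenization involves certified token vaults, network standards, and far more controls.

```python
import secrets

# Toy token vault: merchants and processors pass around the token,
# while the real card number stays locked inside the vault.
_vault = {}

def tokenize(card_number: str) -> str:
    """Replace a real card number with a random, meaningless token."""
    token = secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    """Only the vault owner (e.g., the payment network) can map back."""
    return _vault[token]

token = tokenize("4111 1111 1111 1111")   # a standard test card number
print("merchant systems see only:", token)
print("the vault resolves it to :", detokenize(token))
```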

As a user of financial services there is not much that you can do to directly affect these evolutions. Probably the best you can hope for is understanding the risks of various types of financial transactions and your level of comfort with them (do you use e-pay systems? how about Bitcoin?), and adjusting your behavior accordingly. I’m certainly not a financial advisor, but for me segmenting financial accounts across various institutions and transaction types provides a direct method for comparison shopping across security features, and at least feels like it reduces my overall  exposure to any given fraud or attack. There is a lot at stake in the continued effective functioning of our financial institutions, and the future of this critical infrastructure depends on the collective ability of the industry to manage and adapt to change.

CSAM Back to Basics: Cloud

Sometimes, a single capability becomes a major technology differentiator. Although the emergence of cloud storage and computing relies on mobility and networking, the real enabling capability is virtualization. There are two threads to this story.

First comes the evolution of data centers. Once upon a time, enterprises actually bought lots of hardware servers and put them in special closets to serve the enterprise computing needs. Then, some clever CFOs realized they could save money by leasing these computers instead of buying them. Not long after, the special closets went too, when whole buildings owned by third parties sprang up that would lease space protected by cages to house your leased servers. At least at this stage, an IT manager could go to the building, point into a cage, and say “there’s OUR servers”.

It didn’t take long before some clever soul realized that a whole lot of leased servers were sitting in a whole lot of cages, not really doing very much most of the time. Wouldn’t it be great, they said, if we just owned all the hardware and leased time to companies on whatever server was least busy at the time; after all, nobody actually ever comes to point at their servers, and in fact nobody really much cares as long as they get answers when they need them.

The second thread came from the techno-nerds. Wouldn’t it be totally awesome if we could make one type of hardware pretend to be another type of hardware so it could run all the programs written for both machines? Apple of course was an early adopter, putting an “emulator” on their Macs that worked just like the Microsoft operating system running on PC hardware, so that Apple users could run Microsoft Office (Microsoft released some proprietary changes shortly thereafter; not to be outdone, enterprising coders soon put Mac emulators on their boring office PCs). It worked, but not all that well…yet the concept of virtualization had seen the light of day.

These days, the “machines” that run software in big data centers are almost always virtual. A single hardware server can run several different virtual machines at the same time. The supporting structure has also improved dramatically so that a new virtual machine can be created on-demand. And, it turned out to fit perfectly with all those under-utilized servers sitting in their cages…The Cloud was born.

Google, Amazon Web Services (AWS), and the other major providers can lease you a server, available immediately, at prices of cents per CPU-hour…a virtualized machine that can be created as you need it and disappear when you are done. Of course the data centers do buy racks and racks of actual physical machines, and now use significant fractions of the world’s total energy output to run and cool these machines. Companies have not abandoned their own data centers either…the evolution now is to implement “hybrid cloud,” a combination of on-demand public cloud resources for surge capacity, reliability and so forth, combined with company-owned “on-premise” virtual servers for protection of data assets, backup, and similar reasons.

The transition to the cloud has turned the IT world on its head. No longer is it possible to draw a boundary around your system when much of your data is “out there”. Since it is now possible to access the data from anywhere with an internet connection, why can’t people use their own devices for work purposes? The cloud has also introduced a whole new set of cybersecurity issues: where does your data live? who has access to it? how do you provide security controls to protect your data? what new kinds of attack are available? Check out the next Daveknology CSAM Back to Basics post for a discussion of these issues.

CSAM Back to Basics: Software

Software makes the machines hum, whether the now-ubiquitous mobile apps, the enterprise-grade software suites, or the under-the-hood workings of the operating system. There are two classic types of software, compiled and interpreted (along with many other variations and groupings). Compiled code is written as a complete unit, whose instructions are then treated as data by a program called a compiler, which converts or translates it to binary code. This is an expensive process, and not very flexible, so why does this approach persist? Good compilers can optimize the performance of the resulting binary code and catch or correct many errors, so code that is used often, consumes a lot of computing resources, or needs to have fewer errors is often compiled. Coding languages such as C++, FORTRAN, LISP, or Java are generally compiled languages. In theory you could “prove” these programs are correct, that is, that they do exactly what you want and nothing else, although this is rare in practice since it is very, very expensive.

Interpreted languages on the other hand support much faster implementations and work by simply executing instructions when an input is received. This is common in applications such as browsers that can wait for you to hit the “Return” key and then take you to a desired web page (makes you wonder what they are up to the rest of the time…). Perl, Python, Ruby and JavaScript are all examples of interpreted languages. Faster, more flexible, interactive code is a huge positive, but it is difficult or impossible to test the infinite, arbitrary set of possible inputs, and so the code is more prone to behaving in unexpected ways. Some cybersecurity attacks in fact are specifically designed to find and exploit these unexpected behaviors. In modern software practice these distinctions are increasingly blurred, with just-in-time compilers or line-by-line debuggers that allow for optimization of the resulting code in accordance with the purpose.
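
One way to see the distinction from inside a single language (Python here, purely as an illustration): the same logic can be written and checked ahead of time, or interpreted from whatever string shows up at run time, and only the second form is exposed to arbitrary input.

```python
# Fixed ahead of time: the logic is known before the program runs
# and can be reviewed, tested, or even formally verified.
def discount(price: float) -> float:
    return price * 0.9

print(discount(100.0))

# Interpreted at run time: the "program" arrives as data, so its behavior
# cannot be fully tested in advance -- exactly the exposure described above.
user_formula = "price * 0.9"   # imagine this string arriving from a user or a web form
print(eval(user_formula, {"price": 100.0}))   # eval() of untrusted input is the classic risk
```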

Software errors are often the entry point for cyberattacks. So why are there so many errors? Sometimes it is simply bad programming. Fixes for one type of attack, cross-site scripting, have been well known and publicized for a number of years, and preventing it is simply a matter of using good coding practices (see the short sketch after the list below); yet these flaws persist. Much more often, however, it is systemic issues with the coding process that create the errors and occasionally trip up even the best software engineers. Here are a few:

– Complexity. Yes, there it is again. It is almost impossible to write error-free code for anything but the simplest of systems, much less the enormously complex systems commonly in use today. It’s like writing a long novel without any spelling errors. But worse, since not only the spelling and grammar have to be correct, so too do the logical constructs implemented in the code. Perhaps this is more akin to writing the perfect mystery novel, where all the clues both necessary and misleading have to be present in exactly the right order and proportion so that readers have the information required to determine “Whodunit”, but generally do not.

– Size. Software development is a team sport, often involving large, distributed teams over an extended period of time. One study looked at over 4,000 completed software projects since 1994 and analyzed the team sizes involved. On average, teams of 30 or more people took just under 9 months for a project of 100,000 lines of code. Astonishingly, teams of 5 or fewer people completed 100,000 lines of code in just over 9 months…only one additional week. The difference was in the rate that errors were both introduced, and discovered and fixed. Certainly a management issue, but also an indicator of the penalties that size and complexity introduce.

– Functionality. The first order of business when writing software is to get it to do the functions you intend, and this is often tricky enough. The negative, making sure that the resulting software doesn’t do anything you did NOT intend, is both far more complicated and often under-appreciated by businesses whose revenue depends on shipping working product.

– Test incompleteness. Basically, you can’t test quality into completed code. There are now good statistics on the expected number of errors per 1,000 lines of code, and on the reduction (never to zero) achieved by testing at various stages of development. The best practices are to write good code in the first place, and to try to find errors early.

– Shared code. Almost nobody writes large applications from scratch these days. Open source code and repositories such as GitHub are rational, positive ways to reduce the cost of software development. But it means that nobody fully understands the code they deliver, or can be sure of the types of errors that may lurk within. Crowdsourced testing of open source components seems to result in better code, but the last couple of years have also seen cyberattacks built around errors in some of the ubiquitous underlying modules from the open source libraries.
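
Here is the short sketch promised above: for cross-site scripting, the “good coding practice” is largely just refusing to echo raw user input back into a page. Real applications would lean on their web framework’s templating, which typically escapes output automatically; this is only an illustration.

```python
import html

def render_comment(user_input: str) -> str:
    # Escaping turns markup characters into harmless text, so a <script>
    # tag submitted by an attacker is displayed rather than executed.
    return f"<p>{html.escape(user_input)}</p>"

print(render_comment("Nice post!"))
print(render_comment("<script>steal(document.cookie)</script>"))
# -> <p>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>
```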

So what can a user do? Keep your computer’s code base current by implementing the patches routinely pushed by software vendors. Be careful where you get your software. For example, Apple has been well known for controlling the app publishing process much more closely than Android, which at least initially allowed almost anybody to publish code. The result is that Android phones are attacked much more frequently than Apple phones. And be aware of the unexpected ways in which software behaves. For example, a number of articles are now being written discussing how Google Maps tracks the location of users, and how to prevent this if you desire. Mobile devices allow you reasonable control over many of these types of functionality, so it is worth the time to configure them properly.