How Do We Know the Actuator is Working? Part 4 – Synthesis and Policy

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

Time to pull together the thoughts and data from the previous three posts. Three sections here: how do we know the Actuator is working; are there ways to improve the commercialization success of invention organizations such as Universities and National Labs; and are there ways to improve the outcomes of the national R&D enterprise?

The Actuator

OK, back to the original question: how do we know the Actuator is working? As a participant you will know fairly quickly how it is working for you, once you learn to smooth out the day-to-day highs and lows of being an entrepreneur. At the portfolio level, however, we also carefully track performance metrics against industry norms, and our performance here is very strong. We use these metrics to fine-tune the program content and focus and to inform specific mentoring actions; this continued engagement over the longer term is a strong signal that the Actuator continues to work for you. In addition to investment metrics, we will also track performance in terms of various paths to market that may or may not involve direct financial investment, such as revenue growth and job creation of the companies we mentor.

But those metrics for a portfolio can take 5-7 years to fully mature. In the intermediate term we track leading indicators of later success. Some of these include pilot opportunities, early customer adoption and similar measures of market traction, even things like press coverage. Where needed, we also use these metrics to indicate additional areas where the Actuator can provide ongoing support to our graduates as they mature their businesses.

For the short term the primary assurances are the combined experience of our CIT and Smart City Works staff in the specific market verticals we are addressing, our extensive direct experience in early stage investing, a deep understanding of accelerators and best practices about what it takes to help early stage companies, and the strength of our community of mentors and experts. As an Actuator entrepreneur you should experience all of these, and they are your clue that the Actuator is indeed working.

Inventor Organizations

In this category I would include organizations like Universities, National Laboratories, Government development organizations or programs, and the like. Obviously not all of these organizations, and obviously not every one to the same degree, but the generalization here is that these researchers look first to develop the best technology, and only later think about possibilities for commercialization. In some ways this, along with our strong national basic research capacity, has been the jewel in the crown of American global competitiveness for decades. But as budgets have consistently tightened and questions about the return on our research investment have grown, this open-loop system that grew up in the aftermath of World War II may need some tweaking.

I would proffer three possible fallacies in this development approach in today’s environment. First is the belief that the quality of the technology is what drives the success of commercialization efforts. We evaluate a lot of companies for potential investment, and a common rule of thumb is that about 50% of an investment decision is made on the basis of the entrepreneurial team, perhaps 30% on the market dynamics (size, competition, path to market opportunities) and only the remaining 20% or so on the technology itself.
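The rule of thumb above can be made concrete with a short sketch. The 50/30/20 weights come from the text; the 0-10 scale and the example category scores are invented purely for illustration:

```python
# A toy sketch of the rule-of-thumb investment weighting described above:
# roughly 50% team, 30% market dynamics, 20% technology.
# The 0-10 category scores below are hypothetical, for illustration only.

WEIGHTS = {"team": 0.50, "market": 0.30, "technology": 0.20}

def weighted_score(scores):
    """Combine 0-10 category scores into a single weighted score."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Under these weights, a strong team outweighs a strong technology:
strong_team = weighted_score({"team": 9, "market": 6, "technology": 5})
strong_tech = weighted_score({"team": 5, "market": 6, "technology": 9})
print(strong_team, strong_tech)
```

The point of the sketch is simply that, with these weights, the team-heavy profile always scores higher than the technology-heavy one, which is the fallacy the paragraph describes.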

A second fallacy is that the researchers or developers know what the market wants. They are, as a group, incredibly smart and talented people who have relied on their judgement for success throughout their careers. But like our entrepreneurs, they are almost always wrong with their first guess about what the market wants. This is why most companies in the early stages of development “pivot”, meaning they substantially change something in their original product concept. One of the reasons commercial markets are so successful is their relentless, continuous pressure to deliver, to deliver ever better products, and to deliver only what the market will pay for. Responding to this pressure is what makes companies continuously improve, and developing technologies in isolation from this pressure only delays the inevitable reckoning.

The third fallacy, somewhat related to the first two, is that what researchers and developers do is “innovation”, and innovation is what the market wants. Jon Gertner, in his great book The Idea Factory about the operation of Bell Laboratories during the development of our national telecommunications network, gives the Bell Labs working definition, shown in the Figure.

What researchers and developers do is often Invention by this definition, but Innovation is really the end result of what we now call commercialization activities. Perhaps markets do want innovation, but it is important to be clear about what that means.

Is there a way to address these issues and improve the innovation outcomes for these Inventor Organizations? I believe the answer is “yes”. We are now exploring ways to connect our commercialization expertise directly to the research, inventions and entrepreneurs within these Inventor organizations. Demonstrating success in valuing IP, in business models that appropriately share the positive outcomes of commercialization, and in partnerships that overcome the biases that each side brings may well help improve the ROI for our Invention community.

Federal Government

The Federal government and its interactions with the R&D community may be in need of the biggest update. Many people point to the very cumbersome Federal Acquisition Regulations (FAR) as a road block to innovation. In fact, our experience is that the government probably has most of the legal authorities and mechanisms it needs to be much more effective as an R&D enterprise, but long-standing practices and cultural norms are really a much larger impediment.

One issue is that the Government in many ways still acts as though it is 1950 when Government R&D spending was the dominant source of funding and the Government was large enough to constitute the primary market for innovative companies. This is no longer true, and in fact the relative market positions of the Government and commercial worlds have essentially reversed. The commercial world now spends twice as much on R&D as the Government, and represents a much larger market for innovative companies with more rapid paths to success.

A second recurring issue is Intellectual Property. Government encumbrance of small company IP in exchange for $50K or $100K development contracts makes those companies essentially uninvestable. Yet there are mechanisms in the Government contracting arsenal that do not require this encumbrance, and the value to the Government of locking up IP at such an early stage is minimal at best. So why does this practice persist?

Finally there is the structural problem. In the commercial world a path to market is critical. In the Government market, development support dries up around the SBIR Phase III point (working prototypes at some degree of maturity), followed by limited transition support to the uncertain market of large procurement programs. Why an uncertain market? Government program managers are incentivized to be risk averse, and new technology is almost never operationally robust when it is first introduced. The path to market for these large programs is most often through big systems integrators, and this is inherently risky for that precious IP. And, Government procurements are notorious for delays in awards, changes in scope and similar vagaries that can put a small company out of business long before a contract is ever awarded.

Here too there are ways to improve these outcomes. Certainly more support for transition programs that take interesting prototypes and help mature them would be a step in the right direction. The Government has numerous test and evaluation capabilities that could be appropriately harnessed for this purpose, well within the limits of current contracting comfort zones. Adoption of more commercial-like practices such as those employed by some successful Government programs (In-Q-Tel and SBIR, for example) can help get early market feedback and sufficient competitive pressure to foster continuous evolution of interesting ideas. Increased use of staged awards such as SBIR, where only Phase I recipients are eligible for Phase II and so forth, would help level the playing field for small companies, instead of so much of the innovation dollars going to incumbents working to develop ideas in-house with only limited external review and pressure.

There are others. NASA has placed much of its software in the open source domain, providing both valuable initial IP to innovators as well as fostering increased interaction between NASA and the innovation community. Our EMERGE program with DHS adopted a “commercial-first” approach, transitioning commercial technology into Government uses instead of trying to push Government-developed technology out.

Even the Chinese might provide an interesting model. Their “Made in China 2025” initiative may sound like industrial policy, but it seems to rely on commercial development of commercially viable products within broad sector definitions established by the Government. The implied quid pro quo is that the Chinese Government will then buy products from the best of those commercial companies.


So there it is: the four-part series on the Actuator, Innovation, and the various sectors of our economy that provide innovation. Improving success in this arena is indeed a wicked problem, but there is room for substantial improvement simply by thinking about our collective goals and improving some of our innovation processes. Both our commercial and our national interests may be at stake.

Next Post: Smart City Actuator Focus Areas – Transportation

How Do We Know the Actuator is Working? Part 3 – Corporate/University Programs

Parts 1 and 2 of this series looked at methods for measuring the success of commercial and government programs for accelerating innovation. The other major sources of innovation in the U.S. economy are Corporate R&D efforts and University programs. How effective are these sources for driving innovation? It is hard to generalize, given the very large number of both Corporate efforts and University programs and the paucity of data, although the anecdotal data is mixed to negative.

It does seem as though our model of funding leverage and transition probabilities could be adapted to measure success here as well. For this analysis, however, alternate data sources provide some quantitative clues that support the anecdotal data. On the Corporate side there are certainly large company innovation success stories. Apple, for example, had early success with its mass market personal computer and more recently with its mobile devices (iPhone, iPad). One of the more amazing companies in this regard has to be General Electric which, over the last 50 years (based on where its profits originate), has gone from a heavy equipment manufacturer to a financial services company to the current big data company that also happens to be one of the world’s leading manufacturers…and always as a market leader.

But the anecdotal and other data tell a different story. Many accelerator organizations around the country conduct essentially consulting engagements with large Corporations to help them engage the innovation ecosystem. Data show that 86% of the original 1955 Fortune 500 companies no longer exist, and that 50% of the Fortune 500 in the year 2000 have also failed. The average life expectancy of all companies is only 12.5 years, and the failure rate seems to be increasing. According to Forbes, the reason: “Why U.S. Firms Are Dying: Failure to Innovate.”

One solution might be for Corporations to invest more in R&D. Alas. <re/code> reports on a clever study from Bernstein Research analyst Toni Sacconaghi that looked at “historical R&D spending figures as a percentage of sales for publicly traded tech companies with market caps of $2 billion”, then tracked their stock performance 5 years later, when presumably the research might have paid off. The chart summarizes his data, and concludes that there is “no meaningful relationship between R&D spending and stock performance”. I had a hard time deciphering the chart, so decided to put it in a more graphical form and insert a few trend lines. The results were actually worse than Sacconaghi was willing to conclude. Not only is the regression line negative…the more you spend the worse your results…the standard deviation also decreases with spending, meaning that the likelihood of a poor result is much higher the more you spend. Looking at this chart as a non-financial person, it is not clear to me why anybody would invest in companies spending more than 20% of sales on R&D. As a side note, something really interesting seems to be happening around a magical number of 7-8% R&D spending, but I haven’t a clue what that might be…ideas welcome!
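The trend-line exercise described above is easy to reproduce in miniature. The sketch below fits an ordinary least-squares line to (R&D as a percent of sales, later stock performance) pairs; the data points are synthetic stand-ins chosen only to show the method, not Sacconaghi's actual figures:

```python
# Fit a least-squares trend line to (R&D % of sales, 5-year stock
# performance) pairs. The data are synthetic, invented only to
# illustrate the method; they are not the study's actual figures.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

rd_pct = [5, 8, 12, 18, 25, 30]     # R&D as % of sales (synthetic)
perf = [40, 55, 20, 10, -5, -20]    # 5-year stock return, % (synthetic)
slope, intercept = linear_fit(rd_pct, perf)
print(f"slope = {slope:.2f}")       # negative: more spend, worse returns
```

With data shaped like the article's chart, the fitted slope comes out negative, which is the "more you spend, the worse your results" regression line described in the text.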

How about Universities? Many of them have Tech Transfer offices for their research-derived intellectual property (IP), and again there are certainly success stories at MIT/Harvard, Carnegie-Mellon, Stanford and others. Anecdotally, however, these offices, many based on a traditional licensing model, have overall not been incredibly successful. Two reasons are often given: first, IP transfers much more effectively via people than via paper, so licensing models without researcher support are not effective; and second, some Tech Transfer offices like to value the IP on the basis of research dollars spent, not on market value. As one of my colleagues put it: “when I have these discussions the Tech Transfer Office will often tell me the value of the IP is $10M or something; I tell them it’s worth zero unless somebody takes it to market, and we negotiate from there.” Fortunately some of the more forward-looking Universities are starting to migrate towards a shared risk/reward model where the value of IP is set by marketplace outcomes.

Is there an explanation for this phenomenon? One possible answer lies in where the funding for research comes from, and where it goes. Again an obscure chart, this one from NSF. After staring at this one for a while, what I really wanted to know was net funding flows. For example, in Basic Research the Federal Government funds about 60% and only conducts about 10%…where does the other 50% go? Mostly to Universities, which fund 10% and perform more than 50% of the Basic Research. OK, easy enough to turn into a graph, where the nodes are the sources, scaled to size by percentage of funds sourced, and the graph edges are the net flows, also scaled to size. By applying some awesome PowerPoint skills and a garish color scheme, the Basic Research picture looks like this. Half of all research dollars go from the Federal Government to University-based research, with a small additional amount funded by Industry, totaling about 2/3 of all research performed.
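The net-flow arithmetic behind that graph is simple enough to sketch. The Federal (60% funded, 10% performed) and University (10% funded, 50% performed) shares come from the NSF figures quoted above; the Industry and Other shares below are rough placeholders, not NSF numbers:

```python
# Net funding flow per sector = share of research funded minus share
# performed. Federal (0.60 / 0.10) and University (0.10 / 0.50) come
# from the text; Industry and Other are rough placeholder shares.

funded = {"federal": 0.60, "university": 0.10, "industry": 0.25, "other": 0.05}
performed = {"federal": 0.10, "university": 0.50, "industry": 0.30, "other": 0.10}

net_flow = {k: funded[k] - performed[k] for k in funded}
# Positive = net source of funds; negative = net performer of research.
print(net_flow)
```

The Federal Government comes out at roughly +0.50 (a net source of half of all basic-research dollars) and Universities at roughly -0.40 (a net performer), which is the flow the graph visualizes.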

Now applying the same analysis to the Development portions of the NSF chart yields the following. Here, almost ninety percent of all Product Development activity is funded and performed by Industry, with some support from the Federal Government, while Universities are almost invisible. No wonder there is a bit of a disconnect; Universities apparently are not at all focused on commercializing their research, if the funding numbers are to be believed. One last chart provides the summary. Looking at the per capita number of startups based on University-developed IP, the numbers have been dropping for a while. More to the point, the numbers are low. For Virginia, for example, these numbers equate to about 20 startups per year. Our cybersecurity accelerator, MACH37, evaluates more than 100 companies per year to select a dozen participants just in the area of cybersecurity. Numbers are similar for most venture investors, with only single digit percentages of the deals reviewed actually resulting in investment. Thus for Virginia this may equate to one or two investable companies per year based on University-generated IP. To be fair, this probably underestimates the amount of entrepreneurial activity generated by University programs and proximity, but is probably reasonably accurate in terms of Institutional success, based at least on the anecdotal evidence.

It is clear now, having looked at innovation and technology transfer across commercial accelerators, the Federal Government, Corporations and Universities that successful innovation is one of those “wicked problems.” While there are successes across each of these domains, there are no magic bullets, no guaranteed approaches for innovation. So, how do we know our Smart City Actuator is working? And, are there ways to make this entire national research enterprise more efficient? We will explore those questions in Part 4 of this series.

Next (Thursday 3/23): How do we know the Actuator is Working? Part 4 – Synthesis and Policy

How Do We Know the Actuator is Working? Part 2 – Government Programs

In the last post we looked at commercial accelerator/investment programs and presented a methodology and results allowing more or less direct comparison of outcomes for these programs. The original study, however, was funded by the Federal Government to look at how these commercial approaches compare to government innovation programs.

There are a number of well-known innovation activities across the Federal Government, including the Small Business Administration’s SBIR program (Small Business Innovation Research…check out CIT’s own Robert Brooke as an SBIR 2016 Tibbetts Award Winner!), the Defense Advanced Research Projects Agency (DARPA) and its programs, In-Q-Tel which is the intelligence community program that uses commercial-style practices to encourage government-relevant innovation, and others in the various military services, national laboratories, contractor activities and elsewhere.

Access to data on success rates and outcomes is as fragmented in the government domain as it is in the commercial world, in some cases compounded by classified work or other policies specifically designed to limit information access. But as in the commercial world, the best-known and most successful Government programs are not shy about sharing their results. Special commendation goes to the SBIR program and In-Q-Tel, both of whose data sets have proven invaluable.

The SBIR program identifies three phases of development, with Phase I generally less than 6 months and $150,000, Phase II generally less than $1,000,000 and 2 years, and Phase III externally (non-SBIR) funded, providing a reasonable basis of comparison in our model. The SBIR program publishes good data on the percentage of proposals and the amounts awarded at each Phase, allowing for a robust analysis, although the Government Accountability Office (GAO) did find that the data are insufficient to determine DoD SBIR transition success from Phase II to Phase III. One additional twist is that the Navy instituted a program from 2000 to 2015 called the Transition Assistance Program (TAP) to provide mentoring support to these early stage researchers, and that data is also available in at least one study looking at the period from 2005 to 2008.
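The phase structure just described can be captured as simple data. The dollar and duration caps below are the approximate figures from the paragraph above (Phase III is funded outside SBIR, so its caps are open-ended), and the staged-eligibility helper reflects the Phase I to Phase II to Phase III ordering:

```python
# The three SBIR phases with the approximate caps quoted above.
# Phase III is externally (non-SBIR) funded, so its caps are None.

SBIR_PHASES = {
    "Phase I": {"max_dollars": 150_000, "max_months": 6},
    "Phase II": {"max_dollars": 1_000_000, "max_months": 24},
    "Phase III": {"max_dollars": None, "max_months": None},
}

def prerequisite(phase):
    """Staged awards: only recipients of the prior phase may advance."""
    order = list(SBIR_PHASES)
    i = order.index(phase)
    return order[i - 1] if i > 0 else None

print(prerequisite("Phase II"))  # Phase I
```

This staged structure is what makes the program a reasonable basis of comparison in the funding-leverage model: each phase boundary is a measurable transition point.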

DARPA was a bit of a surprise. When the GAO tried to assess the transition performance of DARPA projects they concluded that “inconsistencies in how the agency defines and assesses its transition outcomes preclude GAO from reliably reporting on transition performance across DARPA’s portfolio of 150 programs that were successfully completed between fiscal years 2010 and 2014.” Another study puts DARPA success rate at about 3-5 products to market per year over 40 years, which the authors characterize as “quite impressive.” Measuring transition success is clearly not a priority.

In-Q-Tel data were a bit harder to come by, but here we were able to use two sources: their published number on downstream funding leverage, and a calculated number based on external data about the size of the In-Q-Tel portfolio and additional published funding events. Thus we were able to calculate a performance number and compare it to the published number, again as a check on the validity of the model. All of these results are shown in the Figure. The In-Q-Tel (IQT) data show reasonable correlation between published and calculated numbers, depending where IQT falls on the investment spectrum, and also show that the best Government programs perform in line with the best of the commercial programs.
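The leverage comparison works roughly like the sketch below. The dollar amounts and the "published" ratio are hypothetical placeholders to show the calculation, not In-Q-Tel's actual numbers:

```python
# Downstream funding leverage = follow-on dollars raised per program
# dollar invested. All figures below are hypothetical placeholders,
# not In-Q-Tel's actual numbers.

def funding_leverage(program_invested, downstream_raised):
    """Dollars of follow-on funding per program dollar invested."""
    return downstream_raised / program_invested

published = 9.0                             # hypothetical published ratio
calculated = funding_leverage(50e6, 430e6)  # hypothetical portfolio totals
print(calculated, abs(calculated - published) < 1.0)
```

When the ratio calculated from portfolio-level data lands close to the independently published figure, that agreement serves as the consistency check on the model described above.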

What about the rest? A couple of things seem clear. First, there is much more emphasis on activity than on outcomes in the Government R&D space…how many programs are funded versus how many of those funded programs succeed in eventually deploying to users. Given the rapid rate of change in technology and the fact that our national strategic competitors are looking very hard for strategic advantage, it is certainly in the U.S. national interest to have a robust scientific community actively researching a large number of areas of interest. In this domain, activity rather than outcomes may in fact be the right metric. Some of the focus on activity is also driven by the Government budget cycle, and certainly if outcomes are not reliably known for 4-7 years, as in the commercial world, this is beyond the next election cycle for most elected officials.

But in that subset where transition to users is important, perhaps even a stated goal, the Government programs seem to struggle. The fact that GAO could not determine transition success rates for either SBIR Phase III or DARPA is one indicator. Plenty of literature speaks of the “Valley of Death” in the Government world, where inventions go to die before ever being deployed.

Among other issues, there are structural reasons for this. The “market” for Government transition is generally Programs of Record, those big, often billion-dollar programs. Those programs run on an entirely different set of principles than the Government R&D world, a set of principles where risk reduction rules the day and innovation may not even be welcome. So most Government R&D programs and national laboratories now have “technology transition” programs or offices, looking to commercialize all those great inventions that have been developed along the way, in some cases with associated patents.

The standard model for these efforts has been to look at the outcomes of the early stage R&D process and license the intellectual property, or try to find an entrepreneur who will take it on, or encourage the inventor to become an entrepreneur. Two problems plague this approach: intellectual property transfers much more effectively via people than via paper; and inventions created and prototyped without the market discipline of external investors and users determining value are almost always poorly optimized for the eventual market they hope to serve.

The programs that have done best at this are those that adopt the most “commercial-like” practices: listen to your customers and end users, get early feedback, understand the needs, understand the price points, worry about transition to market. When GAO looked at a set of DARPA case studies, they summarized it this way in the figure “DARPA Factors for Success.”

The good news is that the Smart Cities Actuator instills the commercial version of exactly this set of principles. While the Federal government can focus significant investment on technology development, it seems that the best Government programs are about as good as the best commercial programs. The difference is not the amount of money but the set of practices that make transition effective.

Next (Monday 3/20): How do we know the Actuator is Working? Part 3 – Corporate/University Programs

What Is a Smart City?

As CIT and Smart City Works developed our new Smart City Works Actuator, this question kept coming up from just about everyone. Some people just asked. Others knew a few reference points: I know so and so city is doing smart parking meters or smart street lights or smart trash collection…is that what you mean? Still others referenced the technology components: do you mean broadband, or Internet of Things (IoT), or cybersecurity, or autonomous vehicles? A few asked the meta questions: will this improve resilience, will this enable the surveillance state, will this improve people’s lives?

The standard web definitions were not much help. Wikipedia has: “A smart city is an urban development vision to integrate multiple information and communication technology (ICT) and Internet of Things (IoT) solutions in a secure fashion to manage a city’s assets – the city’s assets include, but are not limited to, local departments’ information systems, schools, libraries, transportation systems, hospitals, power plants, water supply networks, waste management, law enforcement, and other community services. The goal of building a smart city is to improve quality of life by using urban informatics and technology to improve the efficiency of services and meet residents’ needs.” This is helpful, and brings Quality of Life to the table, but does not provide much guidance on how to build one, or how these many and varied pieces fit together.

Sarwant Singh, based on a Frost & Sullivan study, provides a fairly typical definition: “We identified eight key aspects that define a Smart City: smart governance, smart energy, smart building, smart mobility, smart infrastructure, smart technology, smart healthcare and smart citizen.” Lots of smarts and interdependencies, but not much structure.

So we developed our own definition, one based loosely on the old communications stack model, where each layer of the stack depends on services provided by the layer below it. Note in this version we have explicitly included the 22 City Link™ platform at the Link layer, since an initial implementation of this vision will be with our partners at Gramercy District, where the 22 City Link™ platform is being piloted; other communities may have different Link layer implementations.

Several things stand out in this working definition:

  1. It explicitly ties technologies up the stack to the people-centric use cases around improving quality of life.
  2. It provides context for things such as data collection or cybersecurity or autonomous vehicles…we don’t want to do them just because we can, but because they achieve some goal in the context of a set of infrastructures. This context also helps open up questions along the lines of: what is the proper balance between the privacy of people transiting the urban environment, and data collection for use by retailers…who owns the data, what permissions are needed, can it be re-sold, how long can it be retained, etc.
  3. For the Actuator, it will help us help innovators understand where they fit in a larger picture, which will aid them in defining the boundaries of what needs to be included in their specific product offerings. Furthermore, this provides fodder for discussions of how to scale a product. It is fantastic to be able to demonstrate a product in the friendly, custom confines of Gramercy District, but proving that a product can scale, and is thus investable, requires that it function in a wide range of built environments and infrastructure, old and new.
  4. It can be useful in early “diagnostic” discussions…for developers the discussion includes topics like “what do the buildings look like”. For communities whose vision is something like “become a smart city” but are underserved when it comes to connectivity, it provides a starting point for a longer term strategic growth plan that begins with “first, get connected”. For larger platform companies it may help make the externalities explicit for ongoing product evolution and understanding the sweet spots and limitations for existing products.
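As a minimal sketch, the layered definition might be modeled like this. Only the placement of 22 City Link at the Link layer comes from the post; the other layer names and examples are illustrative guesses at the stack:

```python
# A layered smart-city "stack" where each layer depends on services
# from the layer below. Only the Link layer / 22 City Link placement
# is from the post; the other layer names are illustrative guesses.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    example: str

# Ordered bottom-up: each entry relies on those before it.
SMART_CITY_STACK = [
    Layer("Connectivity", "broadband, wireless"),
    Layer("Link", "22 City Link platform (piloted at Gramercy District)"),
    Layer("Infrastructure", "transport, energy, buildings"),
    Layer("Services", "smart parking, public safety"),
    Layer("Quality of Life", "people-centric use cases"),
]

def depends_on(layer_name):
    """Everything below the named layer, i.e. what it depends on."""
    names = [layer.name for layer in SMART_CITY_STACK]
    return names[: names.index(layer_name)]

print(depends_on("Services"))  # ['Connectivity', 'Link', 'Infrastructure']
```

A structure like this is what makes the "diagnostic" conversations concrete: a community that wants services at the top of the stack can immediately see which lower layers it still needs.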

Our collective understanding of Smart Cities is evolving rapidly as the innovation process and ecosystems begin to explode. Hopefully this working definition will provide a more stable framework for understanding where and how these innovations can ultimately serve to improve our quality of life.

Next (Monday 2/27): What Is an Actuator?

CIT Launches Smart Cities Initiative

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator. Subsequent posts will provide our definition of a Smart City, and more details about an Actuator and how it will help enable the Smart City market.

In conjunction with our partner Smart City Works, CIT just launched a new Smart Cities Actuator, and APPLICATIONS ARE OPEN for the inaugural cohort beginning March 27 at the CIT facility in Herndon, VA. Here is the full Call for Innovation, seeking companies interested in participating in the cohort:

Smart City Works is Open for Applications for the Spring 2017 Cohort in the DC Metro Area to be held at the Center for Innovative Technology

SMART CITY WORKS™ is the world’s first business actuator and a premier business accelerator for improving livability and resilience in cities.  Our unique focus on the built environment aims to dramatically change the way we design, build, and operate civil infrastructure.  With unmatched capability and a world class network of technical resources and cities, we go beyond traditional accelerators to more rapidly move the best technology solutions cities need into the hands of city managers and solution-providing companies.  Our program, conducted in conjunction with the Center for Innovative Technology, is designed to equip companies with the skills, market awareness, and validated products to be highly competitive, growth oriented, and investment ready.

Call For Innovation (CFI): We are looking for entrepreneurs, startups, and companies with emerging products to apply for the Spring 2017 inaugural Washington DC metro cohort, to be held at the Center for Innovative Technology, Herndon, VA.  The program is open to startups globally whose visionary founders are willing to bring their ideas and passions to participate in person in a unique and impactful acceleration process.

In particular, we seek innovative commercial solutions that address significant social and civic challenges—safety, security, livability, and resilience—in urban environments across the United States and the world.

For the Spring 2017 cohort, your solution will focus on one of 3 key areas of the infrastructure challenge:

  • Transport – Solutions that reduce costs, extend serviceable life, reduce congestion, improve parking, improve inter-modalities (car, train, bus, bike, pedestrian), or leverage smart, autonomous, and intelligent transportation solutions to improve our transportation infrastructure network.
  • Resilience and Public Safety – Solutions and/or IoT technologies that address the safety and security of the urban public; that mitigate the impact of rising sea levels, extreme weather events, or other natural or man-made shocks; that protect critical infrastructure; or those solutions that allow cities to be more livable and sustainable.
  • Construction Techniques – Solutions that improve the design, construction, or maintenance of infrastructure; reduce lifecycle costs or improve safety, schedules, or margins.

Important Dates: Applications are open now. Beginning March 1 we will select companies until the class is full; the final date for applications is March 10, 2017.

Where to Apply: http://www.smartcityworks.io
For more information: [ ]

A Tale of Four Cities (Supplement)

In the original Tale of Four Cities post one point of discussion was: “Why on earth would you want to locate and operate a company in the outrageously expensive environs of San Francisco where none of your employees can afford to live?”

Apparently lots of people are asking that question. According to the article “Tech workers are increasingly looking to leave Silicon Valley”, the trend is “highest among people ages 31 to 40, suggesting that people are leaving to find better opportunities elsewhere or to settle down in more affordable areas where they can improve their quality of life.” One chart from the article shows the share of outbound tech job searches from within the Bay Area among this demographic group.

Of course lots of factors are likely at work behind these numbers, and the trend could as easily reverse. The data do continue to support the thesis however that successful innovation ecosystems need to balance the needs of all stakeholder groups (including startup employees), and that these factors can get out of whack.

Certainly Silicon Valley will continue to be the primary hub of global venture capital for the foreseeable future. But, writing in Forbes, Brian Solomon reported last year on the 2015 ranking of accelerator programs by researchers Yael Hochberg and Susan Cohen, which for the first time left off the archetypal accelerator, Y Combinator. Hochberg and Cohen concluded that Y Combinator has evolved into a hands-off seed fund, and is “cashing in on the name it made for itself, [by] raising a multi-billion dollar late stage fund to take advantage of their model that selects great entrepreneurs rather than mold them.” [italics added]

Innovation occurs in many, many places, including Silicon Valley, but if these nascent trends continue we may wake up one day to find that Silicon Valley has become primarily an investment center rather than an innovation center.



A Tale of Four Cities (with apologies to Dickens)

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair…” Charles Dickens, A Tale of Two Cities

Since the beginning of 2016, it seems like the worst of times. We have seen a correction in the stock market as the Chinese economic bubble has popped, taking the global oil markets with it, and bringing back the all-too-recent memories of the Internet bubble of 2000 and the financial bubble of 2008 (watch out, 2024!). The misery has spread to the Tech sector. The unicorn, unofficial mascot of Silicon Valley, which had gone from being a rare beast in 2014 to a veritable population explosion in 2015, is once again on the verge of extinction.

Yet the economic talking heads tell us this is normal, that the U.S. economy is doing well and is reasonably insulated from both the Chinese economy and the negative oil shock. That corrections are a necessary part of the market, to restore balance after a period of irrational exuberance. So, what the heck is going on with Tech?

In 2015 I was Principal Investigator for a DHS-funded program called EMERGE, working to leverage commercial business accelerators to help commercially focused, innovative companies bring some of their technology to address needs of the DHS community. As part of this program we were fortunate to get an inside view of four different business accelerator programs in four different cities: Dallas, Chicago, San Francisco, and Washington DC.

Here is what I learned. First, tech innovation does not occur in isolation; it is the result of effective regional innovation ecosystems that include customers, entrepreneurs, funding sources, a high concentration of expertise and ideas, and enough of a support infrastructure to help the entrepreneurs through the early pitfalls. Each of the four accelerator programs above has done an outstanding job of helping build and then leverage their local ecosystem as an integral part of what makes each region grow.

Second, Silicon Valley is not identical to the Tech sector. Although news coverage often glosses over this fact, innovation occurs in many places across the country. I will argue below that while Silicon Valley is indeed unique in many ways, generalizations based on that unique set of circumstances can often be wrong. In the current situation, the doom and gloom based on over-priced investments there is less relevant in other parts of the country.

And so, the four cities.

Dallas – Texas has several innovation centers including both Dallas and Austin. There is a diverse industry base, with concentrations in energy, health care/life sciences and tech, significant university presence, and a good concentration of wealth. Tech Wildcatters has successfully provided leadership to the region’s startup community with special programs in both health care and tech, and most recently going to a year-round program from the more typical discrete sessions. Dallas is a vibrant startup location, although it is unclear what effect the collapse of oil prices may have on access to capital in the region.

Chicago – political issues aside, Chicago has the benefit of a high concentration of Fortune 500 Corporate Headquarters, a robust investment sector and strong University presence. TechNexus has done a masterful job first in priming the innovation ecosystem development 7 or 8 years ago, and now tapping into the innovation needs of Corporate strategic partners who are looking to early stage companies as a source of new products and ideas. If the city can recover from its social strife it is certainly positioned to continue as a significant center of tech innovation.

San Francisco – San Francisco/Silicon Valley is the undisputed investment capital of the world for tech. According to Pitchbook, in the third quarter of 2015 more than 27% of all the venture capital invested globally came out of Silicon Valley. China has risen rapidly as both a source and target of VC investment, although the collapse of the economy in China seems certain to be a major setback in this area, as the graph seems to indicate starting in Q4 of 2015. New York ranks third on this list, providing just north of 8% of the globally invested capital.

Yet with all that money floating around it appears that some Silicon Valley investors may have had more dollars than sense. If you look at the number of deals and the dollar amounts as compiled by Pitchbook, the dollars invested continued to rise in 2015 even while the number of deals plummeted, leading to a rapid rise in median valuations.

By comparison, valuations in New York during this same time were only 10% of the San Francisco valuations, an enormous disparity. There are some possible alternative explanations for this disparity (bigger opportunities, a move towards later stage investments, etc.), but both the anecdotal evidence at the time (“too much money chasing too few deals” was a sentiment we heard more than once) and the subsequent down rounds of investment even for some of the high flyers indicate that over-valuation on the part of investors was at least one primary cause of the disparity.

A second point. Why on earth would you want to locate and operate a company in the outrageously expensive environs of San Francisco, where none of your employees can afford to live? Or Palo Alto, where Palantir is driving out start-ups by snapping up office space at high rents. Well, there are certainly some reasons: if you want to hang with the cool kids, California is the place you ought to be. If you need to raise a billion dollars or so, where else would you go? And certainly if you want frothy valuations during the good times, the target destination is clear.

A recent Harvard Business School study hinted at one possible evolution of this trend. According to the study:

“Venture capital firms based in locales that are venture capital centers outperform… [as a result of] outsized performance outside of the …firms’ office locations…”

That is, if you are a VC you want to be in one of the centers of VC activity because there is a strong ecosystem of investors…but, the big returns are to be found by investing in other places. Certainly Silicon Valley is not going away as the primary center of activity. Increasingly however, those investors seem to be syndicating with other groups in places such as Dallas, Chicago or…

Washington DC – The region centered around Washington DC is generally considered to include Maryland, Virginia (or at least Northern Virginia), and DC itself. The Federal Government is a large presence, along with some of the specialty areas such as cybersecurity and data analytics it has helped develop. Health care/life sciences is also a major player in the area, and there are multiple world-class universities that support the ecosystem. The region generally ranks in the Top 10 innovation areas of the country, and the area’s capital investments are growing, actually increasing in the 4th quarter of 2015 even while investments were declining nationally. One reason for this increase is the growth in cybersecurity, with the potential for more than a billion dollars in cybersecurity investments in the region in 2016. The two biggest areas were health care/bio and software (including cyber), and there is an organized, active ecosystem working to promote the growth of these and other industry sectors.

Conclusions – Clearly the stock market is in correction territory, driven initially by economic issues in China and the energy sector. While the tech sector also appears under pressure, the fundamentals here are very different. In the short term, what appears to be a broad retrenchment in the sector is actually mostly a correction of inflated valuations on the West Coast that are not indicative of the sector as a whole. As Rick Gordon, Managing Partner of the MACH37 Cybersecurity Accelerator puts it: “while Silicon Valley has been out on the great unicorn hunt, we have been building an army of cockroaches…small, fast, nimble, designed to survive a nuclear winter, and available at a reasonable price.”

The age of easy money from building the next mobile app may be behind us, but the advent of autonomous vehicles, personalized medicine, data-driven everything and more will ensure that the tech sector will continue to drive the next wave of innovation and economic growth for decades to come. But it is increasingly likely that the actual innovations will be found in places like Dallas, Chicago and the Washington region even if the investment capital still flows from New York and Silicon Valley.

CSAM Industry Vertical: Advanced Manufacturing

When most people think of cybersecurity, they think of IT departments protecting corporate networks, or individuals at home on their personal computers. But cybersecurity is differentiating rapidly as more people realize its actual goal is to improve the reliability of some other business process or product, and not an end in itself. Since these business processes vary widely from one industry to another it makes sense to talk about the unique issues and approaches faced by individual market verticals. Today: Advanced Manufacturing.

Manufacturing jobs are increasingly high-tech jobs, and amazingly enough (in spite of our images of automobile assembly plants) the vast majority of these jobs are to be found in very small organizations. Something like 70% of manufacturing jobs are in fact found in companies with fewer than 100 employees, and 98% in companies with fewer than 500. Furthermore, the advanced manufacturing industry is on the verge of content and structural changes perhaps unlike anything since the advent of the assembly line itself.

The factory of the future consists of fully interconnected machinery operating semi-autonomously, with the role of humans relegated more to monitoring the operations for signs of trouble than operating the equipment directly. GE, for example, now monitors 10 million sensors a day from more than $1 trillion of equipment, meaning it has transformed from a manufacturer/lender into a manufacturer/big-data company. Designs are electronic, and product designers are more intimately connected to the manufacturing processes themselves. In some places in China, small design-build-sell groups can run small batches of sample products, sell them immediately on the street, and receive direct customer feedback in real time, allowing almost unimaginably rapid product development through iteration.

Additive manufacturing (including 3D printing) is leading a more radical structural change, as manufacturing capacity is increasingly distributed and responsive to local needs of customers, larger assembly plants or other imperatives. For larger-scale products such as automobiles this is leading to renewed local clustering of supply chains, while at the same time allowing for much more decentralized production of smaller-scale products, with implications that ripple through the logistics and shipping industries as well as impacting job location and migration. One result, for example, has been the “re-shoring” of manufacturing to the United States as changes in cost structures make transportation costs and close collaboration more dominant priorities than finding the least expensive labor markets.

Cybersecurity is an emerging concern in this new world of manufacturing, for several reasons. The most obvious is that putting expensive capital machinery online exposes a manufacturer to attacks it is not accustomed to dealing with. Security professionals in these organizations may come from a more traditional physical security background, or may have more generalized IT expertise than their counterparts in, say, government or the financial industry.

But there’s more. Much of the intellectual property of manufacturers, often contained more in processes than in the products per se, is now exposed and is a high-priority target of economic attacks. The threat of tampering with product design files, resulting in defective products, is suddenly real. Attacking less sophisticated component manufacturers further down long, integrated supply chains as a way of reaching the larger top-level manufacturers is now a well-known tactic. And with an increasing number of products (Internet of Things; see Oct 20 post) containing connected sensors and computing elements, attackers also have the ability to target these components. For the Federal Government, there are activities afoot that would require manufacturers to certify their products as free from known vulnerabilities or else have them considered defective.

This rapid industry change and adoption of 21st century technology is already a revolution in progress. As an information security community we are actively looking for ways to adapt the lessons learned and the leading edge techniques developed in other places to help ensure that our critical manufacturing infrastructure remains a vibrant source of growth and opportunity for decades to come.

CSAM Back to Basics: Internet of Things (IoT)

As with the cloud, where virtualization was the key to unlocking the potential, the Internet of Things was also unlocked by a common denominator technology: the drastically falling price of sensors. Sure, the “Things” of IoT are networked and have computing power at the nodes, but so too does the plain old internet (PoI). What makes IoT different from what came before is the ubiquity of sensors of all types.

One of the first embodiments of IoT in my book was the iPhone 4s (although some of its features had been previewed in the iPhone 3GS). Of course many claims could be made for the first “thing”, but the addition of Siri voice recognition and response, and a more capable chipset make the 4s as good a candidate as any. With that addition the 4s enjoyed a list of sensors including: 3-axis accelerometer, compass, up/down orientation sensor, voice recognition, touch screen, 4 RF sensors (receivers – cellular, bluetooth, GPS, and of all things, FM radio), a light sensor, 2 cameras, the plug controller for both electricity and data…and no doubt more. We tend to think of phones in other categories, but as a sensor platform they are unparalleled.

[Chart: IoT quantities]

Most estimates are now calling for upwards of 20 billion IoT devices by 2020 (which, BTW, necessitates IPv6 to provide addresses for all those devices…see Oct 9 post for more on IPv6). As the chart indicates, these networked sensors will be everywhere, and in every sector. Since the cybersecurity issues with IoT things are essentially the same as general cybersecurity concerns, one might have hoped that the introduction of IoT would be more security-conscious. Alas, in the rush to get devices to market, we are now seeing the same set of issues with IoT that we have seen for so long in enterprise networks. The chart at right represents one of a number of studies of devices on the market in 2014/2015, and essentially no devices tested were secure from all of the most common vulnerabilities. Worse yet, many components of our critical infrastructure, including airlines, automobiles, and the power grid, are now being updated both to bring components onto the network and to introduce IoT-type sensors for better control…and increased vulnerability.
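The IPv6 point is easy to verify with back-of-the-envelope arithmetic. The sketch below (plain Python, using only the ~20 billion device estimate from above) shows why IPv4’s 32-bit address space cannot uniquely address that many devices, while IPv6’s 128-bit space can with astronomical room to spare:

```python
# Back-of-the-envelope: can the address space hold ~20 billion IoT devices?
ipv4_addresses = 2 ** 32           # ~4.3 billion possible IPv4 addresses
ipv6_addresses = 2 ** 128          # ~3.4e38 possible IPv6 addresses
iot_devices_2020 = 20_000_000_000  # the ~20 billion devices estimated above

# IPv4 runs out: fewer addresses than projected devices.
print(ipv4_addresses < iot_devices_2020)   # True

# IPv6 leaves on the order of 1e28 addresses *per device*.
print(ipv6_addresses // iot_devices_2020)
```

In practice NAT stretches IPv4 further, but unique end-to-end addressing for tens of billions of devices is exactly the case IPv6 was designed for.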

So what would it take to secure the Internet of Things? Basically the same set of things required for the internet at large…don’t build security flaws into the software, make sure the devices are continuously updated, keep these devices separated from your other operational networks, control access and data flow. Of course IoT is worse than the PoI, not better, due to the many new vendors and users who are less familiar with the risks than the traditional IT security professionals.
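The “control access and data flow” item above is usually implemented as a default-deny allow list: an IoT device may talk only to the specific destinations it needs, and everything else is blocked. Here is a minimal toy sketch of that idea; the device names, hosts, and rules are hypothetical, purely for illustration, not any real product’s policy:

```python
# Toy sketch of a default-deny ("allow list") policy for IoT traffic.
# Devices, hosts, and ports below are made up for illustration only.
from typing import NamedTuple

class Rule(NamedTuple):
    device: str        # which device the rule applies to
    destination: str   # host the device is allowed to reach
    port: int          # TCP port allowed

ALLOW_RULES = [
    Rule("thermostat", "vendor-update.example.com", 443),  # firmware updates only
    Rule("camera", "nvr.local", 554),                      # video to the local recorder
]

def is_allowed(device: str, destination: str, port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule matches."""
    return Rule(device, destination, port) in ALLOW_RULES

print(is_allowed("thermostat", "vendor-update.example.com", 443))  # allowed
print(is_allowed("camera", "random-host.example.net", 80))         # denied
```

Real deployments enforce the same logic in a firewall or on a separate network segment (a dedicated VLAN for IoT devices), so a compromised camera cannot pivot onto operational networks.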

A good gauge of the current state of understanding of IoT security is the new ISACA 2015 IT Risk/Reward Barometer, which surveyed 7,000 ISACA members and 5,400 consumers from various countries to understand their concerns. Almost three-quarters (73%) of the security professionals believe their business is at a medium to high risk of being hacked via an IoT device, and the same number believe IoT security standards are not adequate. Consumers want more, but are again woefully unprepared for managing all their devices (most underestimate the current number of devices in their household by about 50%) or for the data losses likely to result.

The regulatory side is also in flux. The Federal Trade Commission is taking action, seemingly looking towards broad-based privacy protections rather than anything specific to IoT. Key industry leaders are leery of intrusive legislation or regulation, and the new IoT Congressional Caucus seems to be primarily focused on bringing Congressional colleagues up to speed on how to spell IoT, and the implications. Unfortunately, there are clearly some misguided regulatory efforts. A draft bill in front of the House Energy and Commerce Committee would create fines of up to $100,000 for unauthorized access to motor vehicle systems, even if you own the car. This is the latest salvo in the automobile industry’s campaign, unbeknownst to most consumers, to establish that you don’t really own the software that runs your car, or even have rights to inspect or modify it. While this is especially egregious in light of the recent Volkswagen scandal involving software that defeated regulatory compliance, the automobile industry is certainly well-funded and persistent.

IoT will certainly become a pervasive fact of life over the next 5 years. It poses yet another daunting round of technical, legal, privacy and regulatory hurdles that will take years to sort out. As a consumer, your options will sound familiar: be aware of what devices you have and allow into your environment; for home use, control what you allow on your network; be careful what information you allow to be shared, and with whom. But there are more options here as well, given the state-of-play of IoT. You can choose to help influence the debate beginning in Congress on regulating this industry, or on the FTC rule-making regarding your privacy. You could choose to become involved in advocacy groups like EFF or others who are active in these discussions. We do have a voice in shaping our technology future.


Future Tense

Language matters; the names we give things influence how we think about them.

This realization is not new, but it became apparent once again while thinking about the vision for the future of First Responders as part of the DHS EMERGE program. If you are thinking 3 to 5 years into the future, the descriptions tend to be of the flavor “just like now, only different,” as a way of helping people understand imminent change. Thus the classic “horseless carriages” are just like the carriages we know now, only without the horse (and the role of draught horses has never been the same). “Driverless cars” are just like the cars we have now, only you don’t need to drive them, and autonomous vehicles are just like any vehicle, only operating by themselves. This transitional language is meant to soothe the fearful.

But if you’re thinking 20 years into the future, transitional language is inadequate. Think back 20 years…HTML and the World Wide Web had just been invented; smart phones, Facebook, the Y2K crisis, and revolutions fueled by social media were still long in the future. We can’t imagine life now without the internet…20 years ago we couldn’t imagine life with it.

DHS EMERGE is looking at wearable technology, and we spent some time trying to define what constitutes a “wearable”. In the spirit of moving past transitional language, the obvious question became “what does a wearable look like if you stop assuming a wearer?” And why do we send firefighters into unbearably hot, dangerous situations where visibility and communications are impaired and the limits of human endurance are never far away? Couldn’t robots do some of that? Maybe robot swarms, some fighting the fire, some carrying away the injured, some sniffing for dangerous chemicals nearby…if a firefighter controls the swarm behavior with, say, a haptic gesture device, is that a wearable too? And what is the role of the human? What would the robots call it? Maybe a human-mediated ecosystem, since the key role for the human in this scenario is making key decisions in real time, whereas mostly the robots would be on their own.

And how close do you have to be in space for a system to be a wearable? Some experimental prostheses now can be controlled directly by nerve signals from the human wearer. One experiment inserted a small radio transmitter in the loop and placed the prosthesis across the room in an effort to explore missing limb syndrome. How close is close enough? Of course you can’t think about the future without consulting William Gibson… in this case his new book The Peripheral explores the same type of idea, only the distance is across both space and time.

Gibson also channels the future through a Law Enforcement officer named Lowbeer. A few quotes from a scene in which Lowbeer is interacting with several humans and a rented avatar being controlled by a person of interest in Canada.

[speaking to the remote operator of the rented avatar]: “I am Inspector Ainsley Lowbeer of the Metropolitan Police. You do understand you are present here, legally, under the Android Avatar Act? …Canadian law makes certain distinctions, around physically manifested telepresence, which we do not.”


“Someone my age is all feeds…For my sins, I’ve continual access to most things, resulting in a terrible habit of behaving as if I already know everyone I meet.”


[dialogue between other characters after Lowbeer leaves]: “[She] sampled our…DNA” [he said] examining the palm of the hand that had shaken Lowbeer’s. “Of course she did…how [else] could she be positive we’re who we claim to be?” “We could bloody sample hers”…”And be renditioned…”


So who is legally responsible for damage done by a drone operated from outside the legal jurisdiction? Is DNA-based identity really the future, and what skills will an Officer need for that? And of course who gets access to what information, and how do you protect it?

Once you change the language and remove the transitional framework of the words, the opportunities for the future, and for the innovations to get us there, open much wider. Of course this whole realization had the bad taste to impinge upon me in that half-awake, half-asleep stupor after reading several chapters of Gibson on a flight to Atlanta…as a poem. I don’t normally do poetry, but I thought this one was pretty good, so here it is. I’ve dubbed the genre technoslash.

Terminal Terminology

Horseless carriage? Mourn
The late forgotten draught
Self-driving car? Gone
Captain of the long and winding road
Autonomous vehicle? Certainty
In 20 years they all must be
Transport. Or like automobile become car, simply Go.

Wearables. Assume the wearer within.
Sensors, actuators, robotic swarms
To them perhaps
Human-mediated systems
And now remote. Telepresence.
How close proxim to wearer is the worn?