Author: Daveknology

About Daveknology

Innovators think outside the box. Buddhists think the line between inside and outside is fuzzy. Theistic Existentialists believe that God has refused to create boxes.

Why Engineers Love the Smart City Works Actuator

So now it’s real! A fantastic ribbon-cutting and Meet the Cohort event last Friday the 14th for the new Smart City Works Actuator at CIT, next door to our enormously successful and now four-year-old cybersecurity accelerator, MACH37 (who also graciously hosted the event). The Governor came to get the 100 or so guests pumped up and glad to be Virginians. Thomas Smith, the Executive Director of the American Society of Civil Engineers, spoke about our failing infrastructure and how the Smart City Actuator could play a role in helping renew it. There actually was a ribbon, and the Governor was decisive in cutting it (look at the lever arms on those scissors!). And, in addition to civil engineers, we had electrical, mechanical, transportation, and aerospace engineers, computer scientists and data scientists, a materials scientist or two (graphene, of course), and probably more. So why do all sorts of engineers love the Smart City Works Actuator? We can turn to the laws of physics for answers. Two laws that every engineer learns apply here:

F=ma, where a of course is acceleration,

and the formula for kinetic energy (energy in action):

Kε = ½mv²

Now for our purposes we will let m represent the size of the mentor network, and v represent the volume of innovative companies the accelerator can handle. By starting the Smart City Works Actuator, a has now become 2a, m has become 2m, and v is of course 2v. Substituting into our equations, and letting F represent the amount of fun we are having, any engineer can tell you the results:

(2m)(2a) = 4ma = 4F … four times the fun!

and

½(2m)(2v)² = 8(½mv²) = 8Kε … eight times the energy!!

Yes, it’s true. Engineers love the Smart City Works Actuator because, together with our MACH37 Accelerator, they can come and have four times the fun and experience eight times the energy, all while helping build a better world. Q.E.D.
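For the engineers following along at home, the scaling can be sanity-checked in a few lines of Python (purely illustrative; note that doubling both m and v in the kinetic-energy formula yields a factor of eight, not four):

```python
# Playful sanity check of the scaling claims: F = m*a ("fun") and
# KE = 1/2 * m * v^2 ("energy"), with m the mentor network, a the
# acceleration, and v the volume of companies the accelerator handles.
def fun(m, a):
    return m * a

def kinetic_energy(m, v):
    return 0.5 * m * v ** 2

m, a, v = 1.0, 1.0, 1.0
print(fun(2 * m, 2 * a) / fun(m, a))                        # 4.0: four times the fun
print(kinetic_energy(2 * m, 2 * v) / kinetic_energy(m, v))  # 8.0: eight times the energy
```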

Of course the way we help Actuate a better world is by helping accelerate our innovative entrepreneurs, and the Smart City Works Actuator has some great ones!

IHT.  You no longer need to be a scientist to know whether your water is safe. Using a patented new technology, Integrated Health Technologies’ Sensor Bottle™ detects and relays water quality information to your phone to provide you with real-time peace of mind that the water you consume is safe to drink.  For cities, these bottles provide a crowd-sourced platform for real-time water quality detection and monitoring of municipal water systems.

UNOMICEDGE.  UnomicEdge is a Software Defined Network solution for securely connecting the Cloud to devices at the Network Edge.  It includes a network Hypervisor that not only enforces network security policies, but develops critical business and operational insights from user and device interactions. Smart cities rely on smart IoT devices at the Network Edge.  UnomicEdge not only reduces the cyber risk of IoT, but can provide valuable intelligence to make businesses and cities run smarter.

INFRACCESS.  Infraccess is powering up infrastructure investment by providing easier access to trusted data so you can more efficiently discover investment opportunities, make quicker, better-informed investments, and reduce overall investment risk. The Infraccess web-based workflow platform sources and transforms unstructured information into smart data and proprietary performance indicators to help unlock billions in investment opportunities in infrastructure.

Capital Construction Solutions.  Capital Construction Solutions creates mobile-based risk management platforms for improving enterprise-wide accountability and transparency.  With Capital Construction Solutions deployed in the field, companies can immediately turn day-to-day operations into opportunities to reduce corporate liability, mitigate risk, and significantly increase profits.


PLANITIMPACT.  Design decisions can have significant and long-lasting impact on business and environmental costs.  PlanITimpact has created a smart modeling platform to help building professionals better understand and improve performance, including energy, water use, stormwater and transportation, so owners, investors, and communities can better visualize project impacts and returns on investment.

GREATER PLACES.  Cities worldwide are investing in the next generation of buildings, infrastructure, transportation, and technology. But where can you turn to readily find the best leading-edge solutions in this space?  GreaterPlaces creates a single web-based and mobile platform bringing together the best ideas, inspirations, and practices for designing and governing cities—a marketplace and tools to connect people seeking ideas, products and services to transform cities worldwide.

Come join them and see what you’re missing!


All photos courtesy of Dan Woolley


How Do We Know the Actuator is Working? Part 4 – Synthesis and Policy

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

Time to pull together the thoughts and data from the previous three posts. Three sections here: how do we know the Actuator is working; are there ways to improve the commercialization success of invention organizations such as Universities and National Labs; and are there ways to improve the outcomes of the national R&D enterprise?

The Actuator

OK, back to the original question: how do we know the Actuator is working? As a participant you will know fairly quickly how it is working for you, once you learn to smooth out the day-to-day highs and lows of being an entrepreneur. At the portfolio level, however, we also carefully track performance metrics against industry norms, and our performance here is very strong. We use these metrics to fine-tune the program content and focus and to inform specific mentoring actions; this continued engagement over the longer term is a strong signal that the Actuator continues to work for you. In addition to investment metrics, we will also track performance in terms of various paths to market that may or may not involve direct financial investment, such as revenue growth and job creation of the companies we mentor.

But those metrics for a portfolio can take 5-7 years to fully mature. In the intermediate term we track leading indicators of later success. Some of these include pilot opportunities, early customer adoption and similar measures of market traction, even things like press coverage. Where needed, we also use these metrics to indicate additional areas where the Actuator can provide ongoing support to our graduates as they mature their businesses.

For the short term the primary assurances are the combined experience of our CIT and Smart City Works staff in the specific market verticals we are addressing, our extensive direct experience in early stage investing, a deep understanding of accelerators and best practices about what it takes to help early stage companies, and the strength of our community of mentors and experts. As an Actuator entrepreneur you should experience all of these, and they are your clue that the Actuator is indeed working.

Inventor Organizations

In this category I would include organizations like Universities, National Laboratories, Government development organizations or programs, and the like. Obviously not all of these organizations, and obviously not every one to the same degree, but the generalization here is that these researchers look first to develop the best technology, and only later think about possibilities for commercialization. In some ways this, along with our strong national basic research capacity, has been the jewel in the crown of American global competitiveness for decades. But as budgets have consistently tightened and questions about the return on our research investment have grown, this open-loop system that grew up in the aftermath of World War II may need some tweaking.

I would proffer three possible fallacies in this development approach in today’s environment. First is the belief that the quality of the technology is what drives the success of commercialization efforts. We evaluate a lot of companies for potential investment, and a common rule of thumb is that about 50% of an investment decision is made on the basis of the entrepreneurial team, perhaps 30% on the market dynamics (size, competition, path to market opportunities) and only the remaining 20% or so on the technology itself.
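That 50/30/20 rule of thumb can be written down as a simple weighted score. The sketch below is illustrative only; the weights come from the rule of thumb above, while the function name and example ratings are hypothetical:

```python
# Weighted-score sketch of the investment rule of thumb described above:
# roughly 50% team, 30% market dynamics, 20% technology.
WEIGHTS = {"team": 0.5, "market": 0.3, "technology": 0.2}

def investment_score(ratings):
    """Combine 0-10 ratings for each factor using the rule-of-thumb weights."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

# Hypothetical ratings: a strong team outweighs a merely strong technology.
strong_team = {"team": 9, "market": 6, "technology": 5}
strong_tech = {"team": 5, "market": 6, "technology": 9}
print(round(investment_score(strong_team), 2))  # 7.3
print(round(investment_score(strong_tech), 2))  # 6.1
```

Under this weighting, swapping the team and technology ratings moves the score by more than a full point, which is the point of the rule: bet on teams first.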

A second fallacy is that the researchers or developers know what the market wants; they are as a group incredibly smart and talented people who have relied on their judgment for success throughout their careers. Like our entrepreneurs, they are almost always wrong with their first guess on what the market wants. This is why most companies in the early stages of development “pivot”, meaning substantially change something in their original product concept. One of the reasons that commercial markets are so successful is their relentless, continuous pressure to deliver, deliver ever better products, and deliver only what the market will pay for. Responding to this pressure is what makes companies continuously improve, and developing technologies in isolation from this pressure only delays the inevitable reckoning.

The third fallacy, somewhat related to the first two, is that what researchers and developers do is “innovation”, and innovation is what the market wants. Jon Gertner, in his great book The Idea Factory about the operation of Bell Laboratories during the development of our national telecommunications network, indicates the Bell Labs working definition, shown in the Figure.

What researchers and developers do is often Invention by this definition, but Innovation is really the end result of what we now call commercialization activities. Perhaps markets do want innovation, but it is important to be clear about what that means.

Is there a way to address these issues and improve the innovation outcomes for these Inventor Organizations? I believe the answer is “yes”. We are now exploring ways to connect our commercialization expertise directly to the research, inventions and entrepreneurs within these Invention organizations. Demonstrating success in valuing IP, in business models that appropriately share the positive outcomes of commercialization, and in partnerships that overcome the biases that each side brings may well help improve the ROI for our Invention community.

Federal Government

The Federal government and its interactions with the R&D community may be in need of the biggest update. Many people point to the very cumbersome Federal Acquisition Regulations (FAR) as a roadblock to innovation. In fact, our experience is that the government probably has most of the legal authorities and mechanisms it needs to be much more effective as an R&D enterprise; long-standing practices and cultural norms are really a much larger impediment.

One issue is that the Government in many ways still acts as though it is 1950 when Government R&D spending was the dominant source of funding and the Government was large enough to constitute the primary market for innovative companies. This is no longer true, and in fact the relative market positions of the Government and commercial worlds have essentially reversed. The commercial world now spends twice as much on R&D as the Government, and represents a much larger market for innovative companies with more rapid paths to success.

A second recurring issue is Intellectual Property. Government encumbrance of small company IP in exchange for $50K or $100K development contracts makes those companies essentially uninvestable. Yet there are mechanisms in the Government contracting arsenal that do not require this encumbrance, and the value to the Government of locking up IP at such an early stage is minimal at best. So why does this practice persist?

Finally there is the structural problem. In the commercial world a path to market is critical. In the Government market, development support dries up around the SBIR Phase III point (working prototypes at some degree of maturity), followed by limited transition support to the uncertain market of large procurement programs. Why an uncertain market? Government program managers are incentivized to be risk averse, and new technology is almost never operationally robust when it is first introduced. The path to market for these large programs is most often through big systems integrators, and this is inherently risky for that precious IP. And, Government procurements are notorious for delays in awards, changes in scope and similar vagaries that can put a small company out of business long before a contract is ever awarded.

Here too there are ways to improve these outcomes. Certainly more support for transition programs that take interesting prototypes and help mature them would be a step in the right direction. The Government has numerous test and evaluation capabilities that could be appropriately harnessed for this purpose, well within the limits of current contracting comfort zones. Adoption of more commercial-like practices, such as those employed by some successful Government programs (In-Q-Tel and SBIR, for example), can help get early market feedback and sufficient market competitive pressure to foster continuous evolution of interesting ideas. Increased use of staged awards such as SBIR, where only Phase I recipients are eligible for Phase II and so forth, would help level the playing field for small companies, instead of so much of the innovation dollars going to incumbents working to develop ideas in-house with only limited external review and pressure.

There are others. NASA has placed much of its software in the open source domain, providing both valuable initial IP to innovators as well as fostering increased interaction between NASA and the innovation community. Our EMERGE program with DHS adopted a “commercial-first” approach, transitioning commercial technology into Government uses instead of trying to push Government-developed technology out.

Even the Chinese might provide an interesting model. Their “Made in China 2025” initiative may sound like industrial policy, but it seems to rely on commercial development of commercially viable products within broad sector definitions established by the Government. The implied quid pro quo is that the Chinese Government will then buy products from the best of those commercial companies.


So there it is, the four-part series on the Actuator, Innovation, and the various sectors of our economy that provide innovation. Improving success in this arena is indeed a wicked problem, but there is room for substantial improvement simply by thinking about our collective goals and improving some of our innovation processes. Both our commercial and our national interests may be at stake.

Next Post: Smart City Actuator Focus Areas – Transportation

How Do We Know the Actuator is Working? Part 3 – Corporate/University Programs

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

Parts 1 and 2 of this series looked at methods for measuring the success of commercial and government programs for accelerating innovation. The other major sources of innovation in the U.S. economy are Corporate R&D efforts and University programs. How effective are these sources for driving innovation? It is hard to generalize, given the very large number of both Corporate efforts and University programs and the paucity of data, although the anecdotal data is mixed to negative.

It does seem as though our model of funding leverage and transition probabilities could be adapted to measure success here as well. For this analysis, however, alternate data sources provide some quantitative clues that support the anecdotal data. On the Corporate side there are certainly large company innovation success stories. Apple, for example, has certainly had success with their early mass-market personal computer, and more recently with their mobile devices (iPhone, iPad). One of the more amazing companies in this regard has to be General Electric who, over the last 50 years (based on where their profits originate), has gone from a heavy equipment manufacturer to a financial services company to the current big data company that also happens to be one of the world’s leading manufacturers…and always as a market leader.

But the anecdotal and other data tell a different story. Many accelerator organizations around the country conduct essentially consulting engagements with large Corporations to help them engage the innovation ecosystem. Data show that 86% of the original 1955 Fortune 500 companies no longer exist, and that 50% of the Fortune 500 in the year 2000 have also failed. The average life expectancy of all companies is only 12.5 years, and the failure rate seems to be increasing. According to Forbes, the reason: “Why U.S. Firms Are Dying: Failure to Innovate.”

One solution might be for Corporations to invest more in R&D. Alas. <re/code> reports on a clever study from Bernstein Research analyst Toni Sacconaghi that looked at “historical R&D spending figures as a percentage of sales for publicly traded tech companies with market caps of $2 billion”, then tracked their stock performance 5 years later, when presumably the research might have paid off. The chart summarizes his data, and concludes that there is “no meaningful relationship between R&D spending and stock performance”. I had a hard time deciphering the chart, so decided to put it in a more graphical form and insert a few trend lines. The results were actually worse than Sacconaghi was willing to conclude. Not only is the regression line negative…the more you spend the worse your results…but the standard deviation decreases with spending, meaning the likelihood of a poor result is much higher the more you spend. Looking at this chart as a non-financial person, it is not clear to me why anybody would invest in companies spending more than 20% of sales on R&D. As a side note, something really interesting seems to be happening around a magical number of 7-8% R&D spending, but I haven’t a clue what that might be…ideas welcome!
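For anyone who wants to reproduce the trend-line exercise, a least-squares fit is only a few lines. The data points below are invented purely to illustrate the shape of the analysis; they are not Sacconaghi’s figures:

```python
# Ordinary least-squares fit of 5-year stock performance against R&D spend
# (% of sales). The numbers are made up for illustration; only the negative
# slope mirrors the conclusion discussed above.
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

rd_pct = [5, 8, 12, 18, 25, 30]    # hypothetical R&D spend, % of sales
perf = [40, 55, 20, 10, -5, -15]   # hypothetical 5-year stock performance, %
slope, intercept = fit_line(rd_pct, perf)
print(slope < 0)  # True: in this toy data, more spend goes with worse performance
```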

How about Universities? Many of them have Tech Transfer offices for their research-derived intellectual property (IP), and again there are certainly success stories in MIT/Harvard, Carnegie-Mellon, Stanford and others. Anecdotally however these offices, many based on a traditional licensing model, have overall not been incredibly successful. Two reasons are often given: first, IP transfers much more effectively via people than via paper so licensing models without researcher support are not effective; and, second, some Tech Transfer offices like to value the IP on the basis of research dollars spent, not on market value. As one of my colleagues put it “when I have these discussions the Tech Transfer Office will often tell me the value of the IP is $10M or something; I tell them it’s worth zero unless somebody takes it to market, and we negotiate from there.” Fortunately some of the more forward-looking Universities are starting to migrate towards a shared risk/reward model where the value of IP is set by marketplace outcomes.

Is there an explanation for this phenomenon? One possible answer lies in where the funding for research comes from, and where it goes. Again an obscure chart, this one from NSF. After staring at this one for a while, what I really wanted to know was net funding flows. For example, in Basic Research the Federal Government funds about 60% and only conducts about 10%…where does the other 50% go? Mostly to Universities, which fund 10% and use more than 50% of the Basic Research dollars. OK, easy enough to turn into a graph, where the nodes are the sources, scaled to size by percentage of funds sourced, and the graph edges are the net flows, also scaled to size. By applying some awesome PowerPoint skills and a garish color scheme, the Basic Research picture looks like this. Half of all research dollars go from the Federal Government to University-based research, with a small additional amount funded by Industry, totaling about 2/3 of all research performed.
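The net-flow numbers behind that graph are just each sector’s share of funding minus its share of performance. A minimal sketch, using the approximate Basic Research percentages quoted above:

```python
# Net basic-research funding flows: the share of dollars a sector funds
# minus the share it performs. Positive = net source of funds; negative =
# net performer. Percentages are approximate, as read from the NSF chart.
basic_research = {
    # sector: (funds %, performs %)
    "federal_government": (60, 10),
    "universities": (10, 50),
}

net_flows = {sector: funds - performs
             for sector, (funds, performs) in basic_research.items()}
print(net_flows)  # {'federal_government': 50, 'universities': -40}
```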

Now applying the same analysis to the Development portions of the NSF chart yields the following. Here, almost ninety percent of all Product Development activity is funded and performed by Industry, with some support from the Federal Government, while Universities are almost invisible. No wonder there is a bit of a disconnect; Universities apparently are not at all focused on commercializing their research, if the funding numbers are to be believed. One last chart provides the summary. Looking at the per capita number of startups based on University-developed IP, the numbers have been dropping for a while. More to the point, the numbers are low. For Virginia, for example, these numbers equate to about 20 startups per year. Our cybersecurity accelerator, MACH37, evaluates more than 100 companies per year to select a dozen participants just in the area of cybersecurity. Numbers are similar for most venture investors, with only single-digit percentages of the number of deals reviewed actually resulting in investment. Thus for Virginia this may equate to one or two investable companies per year based on University-generated IP. To be fair, this probably underestimates the amount of entrepreneurial activity generated by University programs and proximity, but is probably reasonably accurate in terms of the Institutional success, based at least on the anecdotal evidence.

It is clear now, having looked at innovation and technology transfer across commercial accelerators, the Federal Government, Corporations and Universities that successful innovation is one of those “wicked problems.” While there are successes across each of these domains, there are no magic bullets, no guaranteed approaches for innovation. So, how do we know our Smart City Actuator is working? And, are there ways to make this entire national research enterprise more efficient? We will explore those questions in Part 4 of this series.

Next (Thursday 3/23): How do we know the Actuator is Working? Part 4 – Synthesis and Policy

How Do We Know the Actuator is Working? Part 2 – Government Programs

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

In the last post we looked at commercial accelerator/investment programs and presented a methodology and results allowing more or less direct comparison of outcomes for these programs. The original study, however, was funded by the Federal Government to look at how these commercial approaches compared to government innovation programs.

[Figure: industry average reference model]

There are a number of well-known innovation activities across the Federal Government, including the Small Business Administration’s SBIR program (Small Business Innovation Research…check out CIT’s own Robert Brooke as an SBIR 2016 Tibbetts Award Winner!), the Defense Advanced Research Projects Agency (DARPA) and its programs, In-Q-Tel, which is the intelligence community program that uses commercial-style practices to encourage government-relevant innovation, and others in the various military services, national laboratories, contractor activities and elsewhere.

Access to data on success rates and outcomes is as fragmented in the government domain as it is in the commercial world, in some cases compounded by classified work or other policies specifically designed to limit information access. But as in the commercial world, the best-known and most successful Government programs are not shy about sharing their results. Special commendation goes to the SBIR program and In-Q-Tel, both of whose data sets have proven invaluable.

The SBIR program identifies three phases of development, with Phase I generally less than 6 months and $150,000, Phase II generally less than $1,000,000 and 2 years, and Phase III externally (non-SBIR) funded, providing a reasonable basis of comparison in our model. The SBIR program publishes good data on the percentage of proposals and the amounts awarded at each Phase, allowing for a robust analysis, although the Government Accountability Office (GAO) did find that the data are insufficient to determine DoD SBIR transition success from Phase II to Phase III. One additional twist is that the Navy instituted a program from 2000 to 2015 called the Transition Assistance Program (TAP) to provide mentoring support to these early stage researchers, and that data is also available in at least one study looking at the period from 2005 to 2008.

DARPA was a bit of a surprise. When the GAO tried to assess the transition performance of DARPA projects they concluded that “inconsistencies in how the agency defines and assesses its transition outcomes preclude GAO from reliably reporting on transition performance across DARPA’s portfolio of 150 programs that were successfully completed between fiscal years 2010 and 2014.” Another study puts DARPA success rate at about 3-5 products to market per year over 40 years, which the authors characterize as “quite impressive.” Measuring transition success is clearly not a priority.

In-Q-Tel data were a bit harder to come by, but here we were able to use two sources: their published number on downstream funding leverage, and a calculated number based on external data about the size of the In-Q-Tel portfolio and additional published funding events. Thus we were able to calculate a performance number and compare it to the published number, again as a check on the validity of the model. All of these results are shown in the Figure. The In-Q-Tel (IQT) data show reasonable correlation between published and calculated numbers depending on where IQT falls on the investment spectrum, and also show that the best Government programs perform in line with the best of the commercial programs.
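The calculated leverage number is simply the downstream funding attracted by a portfolio divided by the dollars invested. A sketch of the comparison, with all figures hypothetical placeholders rather than actual In-Q-Tel data:

```python
# Compare a published downstream-funding-leverage ratio against one
# calculated from portfolio totals. Every number here is a hypothetical
# placeholder, not real In-Q-Tel data.
def leverage(follow_on_funding, invested):
    """Downstream dollars attracted per dollar invested."""
    return follow_on_funding / invested

published_leverage = 9.0   # hypothetical published figure
invested_total = 100.0     # hypothetical portfolio investment, $M
follow_on_total = 850.0    # hypothetical downstream funding events, $M

calculated_leverage = leverage(follow_on_total, invested_total)
print(calculated_leverage)  # 8.5, reasonably close to the published 9.0
```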

What about the rest? A couple things seem clear. First, there is much more emphasis on activity than on outcomes in the Government R&D space…how many programs are funded versus how many of those funded programs succeed in eventually deploying to users. Given the rapid rate of change in technology and the fact that our national strategic competitors are looking very hard for strategic advantage, it is certainly in the U.S. national interest to have a robust scientific community actively researching a large number of areas of interest. In this domain, activity rather than outcomes may in fact be the right metric. Some of the focus on activity is also driven by the Government budget cycle process, and certainly if outcomes are not reliably known for 4-7 years as in the commercial world, this is beyond the next election cycle for most elected officials.

But in that subset where transition to users is important, perhaps even a stated goal, the Government programs seem to struggle. The fact that GAO could not determine transition success rates for either SBIR Phase III or DARPA is one indicator. Plenty of literature speaks of the “Valley of Death” in the Government world, where inventions go to die before ever being deployed.

Among other issues, there are structural reasons for this. The “market” for Government transition is generally Programs of Record, those big, often billion-dollar programs. Those programs run on an entirely different set of principles than the Government R&D world, a set of principles where risk reduction rules the day and innovation may not even be welcome. So most Government R&D programs and national laboratories now have “technology transition” programs or offices, looking to commercialize all those great inventions that have been developed along the way, in some cases with associated patents.

The standard model for these efforts has been to look at the outcomes of the early stage R&D process and license the intellectual property, or try and find an entrepreneur who will take it on, or encourage the inventor to become an entrepreneur. Two problems plague this approach: intellectual property transfers much more effectively via people than via paper; and, inventions created and prototyped without the market discipline of external investors and users determining value are almost always poorly optimized for the eventual market they hope to serve.

The programs that have done best at this are those that adopt the most “commercial-like” practices: listen to your customers and end users, get early feedback, understand the needs, understand the price points, worry about transition to market. When GAO looked at a set of DARPA case studies, they summarized it this way.

The good news is that the Smart Cities Actuator instills the commercial version of exactly this set of principles. While the Federal government can focus significant investment on technology development, it seems that the best Government programs are about as good as the best commercial programs. The difference is not the amount of money but the set of practices that make transition effective.

Next (Monday 3/20): How do we know the Actuator is Working? Part 3 – Corporate/University Programs

How Do We Know the Actuator is Working? Part 1 – Commercial Programs

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

These days, when interviewing ventures for participation in our acceleration programs we typically get asked some variation on that question…what is your success rate? How do we know you will deliver what you promise? Like most reputable programs, CIT has a long track record of successful outcomes across a large portfolio of companies, and we are justifiably proud of that history. We will cite, as do others, some of the statistics on downstream funding and percentage of successful graduates and, along with a few anecdotes of the success stories, that usually answers the question.

But, while there are several listings of many of the accelerator and similar programs, it has been difficult to do direct comparisons of outcomes. For one thing, only a few programs such as TechStars and Y-Combinator publish fairly complete data sets; other data tends to be fragmentary. It is also true that different programs have different objectives, so no single metric is likely to be applicable across the board. Furthermore, how do you compare the outcomes of very early stage programs with those of, for example, A-round institutional investors where the success rates, returns and institutional imperatives are all different?

So, does the accelerator program you are in make a difference? The short answer is yes; the outcomes vary widely. The longer answer gets a bit wonky, so stick with me here. The results, both in this post and the next ones, point to some fairly significant findings as they relate to our national R&D enterprise.

We got the opportunity to study this in more detail during the DHS-funded EMERGE program, initially under a research grant from the USAF Academy. For this research grant, part of the experimental design included standing up the EMERGE accelerator as a way to evaluate its effectiveness in technology innovation for the Government. The initial target market was defined as wearables for first responders. After a few months the outcomes looked quite positive, with the status charts showing lots of green indicators and very few yellow or red. Then the sponsors challenged us: those results look good, but can you prove the program is working relative to all the other innovation activities we fund? Woo; tall order.

The key turned out to be the industry average model presented in the last post:

[Figure: industry average reference model]

Would a comparison of individual portfolio performance against this yardstick provide a method for measuring success? [Spoiler alert] Apparently so. Of course any research study needs its caveats, and the big ones here are: first, the data were generally drawn from publicly published sources as of 2015, so results may have changed or may not be completely accurate; second, the analysis was at an aggregate portfolio level…a good graduate research project would be to redo this type of analysis at the individual company level within portfolios; and third, the EMERGE data in particular were very preliminary, reflecting only a few months of operation, whereas it generally takes 5-7 years for portfolio results to become reliable.

To start, we looked at internal programs. Using the above reference model, how did the early EMERGE data stack up against our own MACH37 cybersecurity accelerator, and against our GAP funds early stage investment program? The results were quite good (full methodology, references and data are documented in the EMERGE 2015 Final Report). Since this original analysis the GAP fund results have continued to improve with additional later stage funding, the MACH37 results have remained steady as the portfolio continues to grow and mature, and the EMERGE results have declined somewhat but are still above the reference model benchmark results.

In fact these results were so good they looked suspicious. Could we explain how these different approaches got to similar outcome levels, or was there some flaw in the methodology? At a top level, at least, we were able to rationalize the results. MACH37 is widely known for the strength of its curriculum and the focused efforts of the partners to secure additional funding for the cohort companies. GAP is known for its highly effective due diligence and mentoring of early stage companies, with the resulting risk reduction making these companies more attractive investments for others. And because EMERGE did not make direct investments but was able to attract a lot of interest from early stage ventures, its cost per “shot on goal” was very low. Since each of these mechanisms is inherent in the model, the results at least passed the initial “sniff test”.

Next, to be sure the model could show some variations in results we added two more external accelerators where the data were available, TechStars and Y-Combinator, both well-known, highly successful, excellent programs. Surprisingly, both performed somewhat below the canonical model. One could hypothesize for TechStars that their global federation style of what are essentially franchises could produce enough geographic variability to explain those data. But Y-Combinator in particular was shocking since they are the godfather of the accelerator business and widely recognized as one of the most successful. What the heck?

[Figure: reference model comparison, CIT programs]

A bit more research however told the tale. The published statistics for Y-Combinator at the time showed that 29% of their accelerator graduates received external funding, less than half of the industry average in the model. But, more than 7% of their accelerator graduates made it all the way to a Series B funding event, roughly twice the industry average, and with a Series B investment considered to be $40M or more, more than 4X the industry average. So, very high failure rate in the accelerator end of things, but off the charts probability of success for those who did continue on. In part this reflects their model of huge cohorts, more than 100 new companies at a time each receiving $120K initial investments…$12M investment per cohort! The accelerator essentially acts as a highly selective due diligence process, resulting in high quality deal flow for the Y-Combinator investors.
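Those comparisons reduce to some quick arithmetic, sketched below. The Y-Combinator figures are the 2015-era published numbers quoted above; the industry-average values are assumptions drawn from the reference model in this post series, not published YC data.

```python
# Rough cross-check of the Y-Combinator comparison (illustrative, 2015-era data).
yc_funded = 0.29           # YC graduates receiving external funding
yc_series_b = 0.07         # YC graduates reaching a Series B (stated as >7%)
industry_funded = 0.66     # assumed: ~2/3 of graduates raise a follow-on round
industry_series_b = 0.037  # assumed: ~1 in 27 entrants reach Series B

print(f"funded ratio:   {yc_funded / industry_funded:.2f}")     # below 0.5, "less than half"
print(f"Series B ratio: {yc_series_b / industry_series_b:.1f}") # roughly 2x the average
```

Under these assumptions the two headline claims check out: YC's funded rate is under half the model's average, while its Series B rate is about double.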

As Yael Hochberg puts it: “Y Combinator is cashing in on the name it made for itself…[t]hey’re talking about raising a multi-billion dollar late stage fund to take advantage of their model that selects great entrepreneurs rather than mold them.” [emphasis added]

This external validation finally convinced us that the methodology was fairly robust and could produce interesting results. The early stage venture program you are in does make a difference, and we continue to be very proud of the fact that CIT programs consistently perform very strongly in terms of outcomes on these types of objective measures.

The original research effort was funded to also look at the performance of Government research and development programs. How do they stack up? That is the topic of Part 2 of this post, along with some policy implications for the national R&D enterprise.

Next (Thursday 3/16): How do we know the Actuator is Working? Part 2 – Government Programs


Will the Smart Cities Actuator Make Me a Gazillionaire?

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

As they like to say in The Hunger Games, “may the odds be ever in your favor.” Meaning that even though many people die along the way, and your odds of hitting it big are likely smaller than you think, you never know, it’s the most exciting game in town, and what else were you planning to do with the next few years of your life anyway? Starting and running a business is sort of like that. What the Actuator can do is substantially improve your odds of getting to a decent exit.

Here’s how it works. First, the failure rate for new businesses is very high. Per the attached chart, almost 78% of all startups fail within the first 5 years…only 22% survive. The comparable number for accelerator graduates is that, after 5 years, 33% are still in business, an increased survival rate of 50% compared to the population of all new businesses.
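That 50% figure is just the relative improvement between the two survival rates, as this minimal sketch shows (numbers are the post's rounded figures):

```python
# Five-year survival rates cited in the post.
all_startup_survival = 0.22   # ~22% of all new businesses survive 5 years
accelerator_survival = 0.33   # ~33% of accelerator graduates survive 5 years

improvement = accelerator_survival / all_startup_survival - 1
print(f"{improvement:.0%}")   # 50% -> the "increased survival rate of 50%"
```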

Accelerator graduates also do much better at fund raising. There are about 800 accelerator graduates on average per year nationally, out of nearly 600,000 new businesses created per year, or roughly 1/10th of 1% of all new businesses. Yet according to Pitchbook, this small population of companies accounted for almost 33% of the Series A capital raises in 2015.

As part of my research into accelerator models I created a reference model, based on industry averages for funding at various stages of maturity, along with the likelihood that a company will move from one stage to the next.

[Figure: industry average reference model]

For a new startup company entering a typical accelerator program (and receiving an average $100K investment to start), about 2/3 of those companies will go on to receive Angel round funding on the order of $250K. Half of those will raise a seed round of $1M, and so forth down the fund-raising line.

When you start stacking up those probabilities, here is what you get. For a single company coming out the end of the pipe with Series B funding, it takes three candidate companies who have raised a Series A round, nine with a seed round, eighteen Angel round companies, and twenty-seven accelerator entrants to account for the successive attrition through the fund-raising rounds (very much in line with The Hunger Games odds, by the way!).
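The stacked probabilities above can be sketched as a quick check. The angel and seed conversion rates come from the post; the Series A and Series B rates are inferred from the 27/18/9/3/1 counts rather than stated explicitly.

```python
# Funnel arithmetic for the reference model, using exact integer counts.
stages = [
    ("angel round", (2, 3)),   # ~2/3 of accelerator entrants raise an angel round
    ("seed round",  (1, 2)),   # half of those raise a seed round
    ("Series A",    (1, 3)),   # inferred: 9 seed companies -> 3 Series A
    ("Series B",    (1, 3)),   # inferred: 3 Series A companies -> 1 Series B
]

companies = 27  # accelerator entrants
for stage, (num, den) in stages:
    companies = companies * num // den
    print(f"{stage}: {companies} remaining")  # prints 18, 9, 3, 1
# Overall: 1 of 27 entrants, i.e. roughly 4%, completes a Series B round.
```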

Now some companies don’t go through this entire process, and many of the outcomes prior to a Series B can be very nice outcomes for the entrepreneurs. But, the reality is that accelerator graduates do MUCH better than non-accelerator startup companies in terms of both survival and fund-raising, and yet even with these advantages the numbers show only about 4% of these companies go all the way to complete a Series B funding round.

One last consideration: the accelerator you’re in matters. Check out our next post for some more insight on metrics to help understand accelerator performance, and see the difference that a robust accelerator program can make.

In a perfect world, the odds will be ever in your favor, and perhaps the coveted Initial Public Offering (IPO) will occur. Home free at last! Gazillionaire, here we come!! But, life is never quite that easy. Check out this great story from the L.A. Times for some additional insights into the tricky internal company business of going through an IPO.

Next (Monday 3/13): How do we know the Actuator is Working?

What Are the Economics of an Actuator?

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

Running a successful Actuator, accelerator, incubator or similar early stage investment program requires finding the sweet spot where three sets of economics overlap: those of the early stage entrepreneurial ventures, the investors that support the ecosystem, and the actuator itself.

The economics for early stage entrepreneurial ventures are conceptually fairly straightforward: when does the cash run out, and can I raise enough money before then to keep the company going? In the vernacular, this is the “runway”…how much runway do I have left. And the job of the early-stage CEO is almost always heavily tilted towards fundraising.

Two things make this more palatable. First, a successful accelerator will already include a number of investors in the ecosystem, and the program itself will help the entrepreneur better understand who to approach for funding, how and why. Second (the economic carrot in this plot) is the proverbial “exit.” At some point a successful early stage company starts selling enough product that somebody thinks the company has a great future, or the people are worth collaborating with, or the product is a good strategic fit. At that point they may buy the company, do an “acqui-hire,” or put in enough money that they bring in some new people to help run the company. Oh yes, and occasionally things can even appear so successful that they start selling stock to the public: the IPO.

Investor economics are similarly straightforward: investors are always looking for good (or great) returns based on the amount of risk they take. Early stage investors make very risky investments, and so expect very good returns for their money. How risky? Well, the attached chart shows that roughly half of all venture investments lose money while only 5% generate about 1/3 of the total returns; not necessarily where people want to put the bulk of their retirement funds, for example.

On the West Coast this has in part led to the great Unicorn hunt, with big money trying to find or create the 1% of that 5% that turns into something like Facebook or the Snap IPO. But it does take big money; if 0.05% of your investments turn into unicorns, $1M investment per attempt takes $2B of investment capital. Things are a little more conservative on the East Coast, and the above chart also shows that half the returns come from investments that yield 2X to 5X the invested capital. And early seed round investments are more like $50K or $100K instead of $1M, meaning that this scales to be a feasible investment strategy for people with high risk tolerance but not quite as much investable capital as Silicon Valley.
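The unicorn arithmetic above works out as a quick back-of-envelope calculation (illustrative figures from the paragraph, not actual fund data):

```python
# Back-of-envelope unicorn math: 1% of the top-returning 5% of investments.
unicorn_rate = 0.0005          # 0.05% of investments become unicorns
cost_per_attempt = 1_000_000   # $1M per investment

capital_needed = cost_per_attempt / unicorn_rate
print(f"${capital_needed / 1e9:.0f}B")  # $2B to expect a single unicorn
```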

For individual investors in this category, finding a stream of suitable investments and assessing the risk of each one is a daunting challenge. However, the Actuator plays an important role here as well, since a significant part of Actuator activity revolves around creating and evaluating a flow of interesting companies, and reducing the risk that those companies fail to survive and reach market. This intermediary role for an accelerator, in both reducing risk and matching investors and entrepreneurs, is one of the characteristics that can make them so effective.

What about the Actuator economics? To understand that, these entities are best viewed as startup ventures themselves. The ones we have worked with around the country generally have operating costs in the range of $1M – $5M per year, accounting for salaries, facility and other associated operational costs. Where does that money come from? For many it comes from grants, or community or University funding, legislative appropriations or sponsorships, and these sources are critical for establishing and maintaining a program in the early stages.

But is there at least a conceptual model that would minimize or eliminate the need for these external funding sources over time? Yes, and it is very analogous to a startup gaining enough product traction to be self-sustaining on the basis of revenue generated from product sales. The “product” that accelerators “sell” is investment opportunity in early stage ventures. The quality of that product is directly tied to the quality of the incoming ventures as well as the ability of the accelerator to reduce the risk of those investments through mentoring and interaction with a robust ecosystem.

One self-sustaining economic model uses equity investment returns of the accelerator to pay for operating costs, as shown in the figure. It takes 4-7 years or more to realize the value of these investments; the model assumes 5 years. It also assumes $2M/yr operating costs, a dozen investments of $50K per year ($600K total), and a distribution of outcomes consistent with those shown above. So after 5 years, 1/3 (four) of these companies are likely to have failed, we assume four generate 2X ($100K) returns, three generate 5X ($250K) returns, and one generates a 40X ($2M) return. Note that if your initial $50K investment was in exchange for 8% equity and no dilution occurs in between, the company value would need to be $25M for your piece to be worth $2M. As the economic model chart shows, under these somewhat optimistic assumptions (and assuming you can repeat this success year after year) the model can become self-sustaining.
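Putting those assumptions into one place shows how the books balance. Every figure below is the post's illustrative assumption, not actual portfolio data:

```python
# Hedged sketch of the self-sustaining accelerator model described above.
operating_cost = 2_000_000             # assumed annual operating cost
outlay = operating_cost + 12 * 50_000  # plus a dozen $50K investments per year

# Assumed outcome distribution for a 12-company cohort, realized ~5 years out:
cohort_returns = (
    4 * 0            # four failures
    + 4 * 100_000    # four 2X returns on the $50K investment
    + 3 * 250_000    # three 5X returns
    + 1 * 2_000_000  # one 40X return (an ~$25M company at 8% equity)
)

print(outlay, cohort_returns)  # 2600000 3150000 -> returns cover the yearly outlay
```

In steady state (one cohort maturing per year), the $3.15M of realized returns covers the $2.6M annual outlay, which is what makes the model self-sustaining under these optimistic assumptions.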

While this is a challenging model, it is also helped by the fact that there are a number of secondary benefits that entice external groups to help defray some of the costs. For later stage investment groups this private source of vetted deal flow is attractive. For Universities the ties to entrepreneurship curricula make it a reasonable extension of those efforts. Opportunities for economic growth often entice legislatures, and strategic Corporate partners may see sponsorship as an inexpensive way to find strategically relevant innovative technologies.

Creating and sustaining a successful accelerator-type program requires the ability to thread the needle in a way that meets the economic imperatives of three major stakeholder groups: the entrepreneurs, the investors, and the accelerator itself. No wonder that many of these programs do not survive when the initial funding runs out. CIT has been fortunate to have both a successful investment experience with early stage ventures over many years, and much better than average outcomes in running accelerator programs directly. This proven success plus great partners, good timing, and a great environment for building business ecosystems is what will help ensure the success of our new Smart City Works Actuator.

Next (Thursday 3/9): Will the Smart Cities Actuator Make Me a Gazillionaire?


How Did the Smart City Actuator Originate?

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

With any activity of the magnitude of our Smart City Actuator there are numerous threads that lead to its creation. This post covers three threads among many, from the CIT perspective: the motivation, the mission, and the experience base that makes it possible.

The motivation is fairly simple. When I look out the window, the new Innovation Center Silver Line Metro station, two years in the making and now well along towards completion, connects via a new Innovation Avenue to Rte 28 just north of Dulles Airport. It is not hard to imagine that buildings will soon replace the trees in the foreground as the property around the station is developed.

The full Silver Line extension reaches well past the airport into Loudoun County, and last summer it became public that 22 City Link, the developer of the Gramercy District station in Ashburn, was intent on building a smart city. So the tactical motivation was obvious: what a great target of opportunity to build an innovation district along this Dulles corridor and perhaps beyond, connecting with similar-minded initiatives along the Silver Line and more broadly in Virginia and elsewhere.

The second thread is mission. As Virginia’s Accelerator, our CIT mission is to accelerate innovation commercialization and entrepreneurship across the Commonwealth. We do this through a number of programs, including grants (CRCF), direct investment (GAP), business accelerators (MACH37 and EMERGE), support to communities implementing new capabilities (Broadband) and others. Part of this effort involves looking forward to rapidly evolving technologies and how they will manifest in the market, and clearly the set of technologies around Internet of Things, autonomous everything, data analytics, and smart cities more broadly are an area of exceedingly rapid growth. Combined with the development happening literally on our doorstep, this was a no-brainer as a focus area that fits squarely in our mission sweet spot.

There is another aspect of our CIT mission that is also critical to this focus area: our Statewide charter. As we look more broadly across Virginia communities, many of them are interested in adopting some aspect of this technology set. Many businesses already support specific products or niches within the broader space. Virginia Universities have world-class research under way in many of these areas. And State Agencies, such as VDOT in the transportation space, are working to understand and provide leadership in this new set of technologies that will certainly change the face of many Government functions and services.

Two critical aspects for the success of a program like the Actuator are the depth of the mentoring community, and the opportunities to scale initial prototypes to a larger number of market opportunities. Our Statewide charter means that we are already aggressively pursuing partnerships with communities, businesses, State and local Government entities, and Universities to help harness this enormous set of resources. Through these collaborations we can help ensure the success of the Actuator and provide economic development opportunities throughout the region.

The third thread that led to the Smart City Works Actuator is the experience base we are fortunate to be working with to launch the Actuator. CIT of course brings ten years of direct investment and interaction with early stage companies through our various programs, along with direct accelerator experience via MACH37 and EMERGE; we know what it takes to build one, and how to make it succeed.

But the Actuator would not have been possible without the fantastic partnership we have formed with Smart City Works, and the leadership they provide. The Smart City Works team is in fact the lead operator for the Actuator with CIT support, and brings to the table the foundational knowledge in this vertical:

  • a large wealth of industry experience in construction and infrastructure that strongly complements the CIT technology base
  • large networks of contacts throughout the country that are again complementary and bring a large array of mentors, investors and other connections to the table
  • an experienced management team dedicated to the success of the Actuator and excited about the long term possibilities for smart cities

It was obvious from the very first meeting that a partnership between CIT and Smart City Works would lead to something special, and the preliminary activities to bring it to reality have only reinforced that initial impression. What you see is only the beginning!

Every now and again the stars seem to align in a way that dares you to ignore the possibilities. For the Smart City Works Actuator, the motivation, mission, experience base and timing have all aligned, and we are fortunate to be in a position to seize the opportunity.

Next (Monday 3/6): What Are the Economics of an Actuator?

What Is an Actuator?

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

In a previous post we announced the Smart City Works Actuator and described the types of technologies we will be evaluating. But, hang on a minute, what the heck is an actuator anyway? In the spirit of “practice what you preach”, one of the things we teach innovators is how to describe their business in a 3 second sound bite (in case TV news ever asks…). So here is my sound bite for an actuator: it is a combination accelerator and incubator, with built-in path-to-market opportunities. Well…ok…so what is an accelerator and an incubator, and how are they different? This chart from AlleyWatch provides a nice summary.

Traditionally, accelerators are intensive training programs for newly formed or forming businesses, combined with some initial equity investment and extensive mentoring, and designed to position products in the market and secure early customers. Critically, the long-term success of an accelerator depends on the business success of the companies that participate.

Traditional incubators tend to be more like co-working spaces where early-stage companies of varying maturities can conduct business and collaborate with peers. Generally the success of an incubator is less directly tied to the success of the companies located there, with the implications that there is typically no seed funding or equity stake for the incubator and less of a focus on a large mentor network. More recently, these “traditional” models are converging, with accelerators providing co-working types of spaces for their graduates, and incubators more actively mentoring and even investing in a few companies using the incubator spaces.

One other distinction is important, the goals of the accelerator or incubator. Three types of programs have emerged:

  • Commissioned corporate-focused engagements designed to address specific innovation needs of large enterprises, which typically disappear when the needs are sufficiently addressed;
  • University-based programs, often in conjunction with faculty research and educational programs; and
  • Independent operations that may be focused on economic development or investment returns.

As examples, Stanford and Carnegie-Mellon have well-known university-based entrepreneurship programs, as do many others. Stanford’s program is loosely associated with Y-Combinator, an independent accelerator operator focused on investment returns. TechStars is an independent accelerator operator with a large number of “franchisees” around the globe; roughly half of their business is from custom programs sponsored by specific corporate partners. In the Washington, D.C. area, AOL Fishbowl and others provide independently operated incubator space, and 1776 represents a network of such spaces. In this categorization, CIT is an independent accelerator operator (MACH37 and EMERGE) with an economic development mission.

So, back to Actuator. The Smart City Works Actuator is a combination accelerator and incubator. That is, we expect to work with both newly forming businesses and somewhat more mature early stage companies. In our other accelerator operations, CIT has found that even the more mature of these companies can usually gain significant value from the intensive training opportunities and strong mentor network associated with accelerators in the AlleyWatch figure, while startup companies that graduate from the accelerator program continue to need the extended time frames and community support more typical of incubators. The Actuator is designed to provide both. There are also differences in the funding models indicated in the AlleyWatch figure; those will be the topic of a later post in this series.

The remaining portion of the Actuator sound bite is “built-in paths to market.” One of the reasons that Corporate engagements via accelerators or incubators have become popular is the implicit path-to-market that a Corporate sponsorship implies. Corporations typically engage because they have difficulty innovating on their own, and are looking for additional products or capabilities to fill out their product lines. And in the end, many successful entrepreneurial companies are in fact acquired by Corporations; it is tempting therefore to focus primarily on this strategic path-to-market and ignore others.

But this path is not without issues of its own, and many other paths to market exist as entrepreneurial companies enter their rapid growth phase. The accelerator approach of seeking external funding to fuel growth is of course an option. In addition to these, however, a core component of our Actuator is engagement with a range of testbed, pilot, and early deployment opportunities. We are putting in place a set of working agreements with Virginia Universities, State Agencies, localities around the Commonwealth and private organizations such as the Gramercy District, all leading towards direct opportunities for our Actuator cohort ventures to show their capabilities functioning in the real world.

The tagline we use for this type of emergent ecosystem is Capital, Curriculum, Community. But if someone asks you what an Actuator is, just tell them: it’s a combination accelerator and incubator with built-in path-to-market opportunities.

Next (Thursday 3/2): How did the Smart City Actuator Originate?

What Is a Smart City?

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

As CIT and Smart City Works developed our new Smart City Works Actuator, this question kept coming up from just about everyone. Some people just asked. Others knew a few reference points: I know so and so city is doing smart parking meters or smart street lights or smart trash collection…is that what you mean? Still others referenced the technology components: do you mean broadband, or Internet of Things (IoT), or cybersecurity, or autonomous vehicles? A few asked the meta questions: will this improve resilience, will this enable the surveillance state, will this improve people’s lives?

The standard web definitions were not much help. Wikipedia has: “A smart city is an urban development vision to integrate multiple information and communication technology (ICT) and Internet of Things (IoT) solutions in a secure fashion to manage a city’s assets – the city’s assets include, but are not limited to, local departments’ information systems, schools, libraries, transportation systems, hospitals, power plants, water supply networks, waste management, law enforcement, and other community services. The goal of building a smart city is to improve quality of life by using urban informatics and technology to improve the efficiency of services and meet residents’ needs.” This is helpful, and brings Quality of Life to the table, but does not provide much guidance on how to build one, or how these many and varied pieces fit together.

Sarwant Singh, based on a Frost & Sullivan study, provides a fairly typical definition: “We identified eight key aspects that define a Smart City: smart governance, smart energy, smart building, smart mobility, smart infrastructure, smart technology, smart healthcare and smart citizen.” Lots of smarts and interdependencies, but not much structure.

So we developed our own definition, one based loosely on the old communications stack model, where each layer of the stack depends on services provided by the layer below it. Note in this version we have explicitly included the 22 City Link™ platform at the Link layer, since an initial implementation of this vision will be with our partners at Gramercy District, where the 22 City Link™ platform is being piloted; other communities may have different Link layer implementations.

Several things stand out in this working definition:

  1. It explicitly ties technologies up the stack to the people-centric use cases around improving quality of life
  2. It provides context for things such as data collection or cybersecurity or autonomous vehicles…we don’t want to do them just because we can, but because they achieve some goal in the context of a set of infrastructures. This context also helps open up questions along the lines of: what is the proper balance between the privacy of people transiting the urban environment, and data collection for use by retailers…who owns the data, what permissions are needed, can it be re-sold, how long can it be retained, etc.
  3. For the Actuator, it will help us help innovators understand where they fit in a larger picture, which will aid them in defining the boundaries of what needs to be included in their specific product offerings. Furthermore, this provides fodder for discussions of how to scale a product. It is fantastic to be able to demonstrate a product in the friendly, custom confines of Gramercy District, but proving that a product can scale, and is thus investable, requires that it function in a wide range of built environments and infrastructure, old and new.
  4. It can be useful in early “diagnostic” discussions…for developers the discussion includes topics like “what do the buildings look like”. For communities whose vision is something like “become a smart city” but are underserved when it comes to connectivity, it provides a starting point for a longer term strategic growth plan that begins with “first, get connected”. For larger platform companies it may help make the externalities explicit for ongoing product evolution and understanding the sweet spots and limitations for existing products.

Our collective understanding of Smart Cities is evolving rapidly as the innovation process and ecosystems begin to explode. Hopefully this working definition will provide a more stable framework for understanding where and how these innovations can ultimately serve to improve our quality of life.

Next (Monday 2/27): What Is an Actuator?