How Do We Know the Actuator is Working? Part 2 – Government Programs

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

In the last post we looked at commercial accelerator/investment programs and presented a methodology and results allowing more or less direct comparison of outcomes for these programs. The original study, however, was funded by the Federal Government to look at how these commercial approaches compared to government innovation programs.


There are a number of well-known innovation activities across the Federal Government, including the Small Business Administration’s SBIR program (Small Business Innovation Research…check out CIT’s own Robert Brooke as an SBIR 2016 Tibbetts Award Winner!), the Defense Advanced Research Projects Agency (DARPA) and its programs, In-Q-Tel which is the intelligence community program that uses commercial-style practices to encourage government-relevant innovation, and others in the various military services, national laboratories, contractor activities and elsewhere.

Access to data on success rates and outcomes is as fragmented in the government domain as it is in the commercial world, in some cases compounded by classified work or other policies specifically designed to limit information access. But as in the commercial world, the best-known and most successful Government programs are not shy about sharing their results. Special commendation goes to the SBIR program and In-Q-Tel, both of whose data sets have proven invaluable.

The SBIR program identifies three phases of development, with Phase I generally less than 6 months and $150,000, Phase II generally less than $1,000,000 and 2 years, and Phase III externally (non-SBIR) funded, providing a reasonable basis of comparison in our model. The SBIR program publishes good data on the percentage of proposals and the amounts awarded at each Phase, allowing for a robust analysis, although the Government Accountability Office (GAO) did find that the data are insufficient to determine DoD SBIR transition success from Phase II to Phase III. One additional twist is that the Navy instituted a program from 2000 to 2015 called the Transition Assistance Program (TAP) to provide mentoring support to these early stage researchers, and that data is also available in at least one study looking at the period from 2005 to 2008.

DARPA was a bit of a surprise. When the GAO tried to assess the transition performance of DARPA projects, they concluded that “inconsistencies in how the agency defines and assesses its transition outcomes preclude GAO from reliably reporting on transition performance across DARPA’s portfolio of 150 programs that were successfully completed between fiscal years 2010 and 2014.” Another study puts the DARPA success rate at about 3-5 products to market per year over 40 years, which the authors characterize as “quite impressive.” Measuring transition success is clearly not a priority.

In-Q-Tel data were a bit harder to come by, but here we were able to use two sources: their published number on downstream funding leverage, and a calculated number based on external data about the size of the In-Q-Tel portfolio and additional published funding events. Thus we were able to calculate a performance number and compare it to the published number, again as a check on the validity of the model. All of these results are shown in the Figure. The In-Q-Tel (IQT) data show reasonable correlation between published and calculated numbers depending on where IQT falls on the investment spectrum, and also show that the best Government programs perform in line with the best of the commercial programs.

What about the rest? A couple things seem clear. First, there is much more emphasis on activity than on outcomes in the Government R&D space…how many programs are funded versus how many of those funded programs succeed in eventually deploying to users. Given the rapid rate of change in technology and the fact that our national strategic competitors are looking very hard for strategic advantage, it is certainly in the U.S. national interest to have a robust scientific community actively researching a large number of areas of interest. In this domain, activity rather than outcomes may in fact be the right metric. Some of the focus on activity is also driven by the Government budget cycle process, and certainly if outcomes are not reliably known for 4-7 years as in the commercial world, this is beyond the next election cycle for most elected officials.

But in that subset where transition to users is important, perhaps even a stated goal, the Government programs seem to struggle. The fact that GAO could not determine transition success rates for either SBIR Phase III or DARPA is one indicator. Plenty of literature speaks of the “Valley of Death” in the Government world, where inventions go to die before ever being deployed.

Among other issues, there are structural reasons for this. The “market” for Government transition is generally Programs of Record, those big, often billion-dollar programs. Those programs run on an entirely different set of principles than the Government R&D world, a set of principles where risk reduction rules the day and innovation may not even be welcome. So most Government R&D programs and national laboratories now have “technology transition” programs or offices, looking to commercialize all those great inventions that have been developed along the way, in some cases with associated patents.

The standard model for these efforts has been to look at the outcomes of the early stage R&D process and license the intellectual property, or try to find an entrepreneur who will take it on, or encourage the inventor to become an entrepreneur. Two problems plague this approach: intellectual property transfers much more effectively via people than via paper; and inventions created and prototyped without the market discipline of external investors and users determining value are almost always poorly optimized for the eventual market they hope to serve.

The programs that have done best at this are those that adopt the most “commercial-like” practices: listen to your customers and end users, get early feedback, understand the needs, understand the price points, worry about transition to market. When GAO looked at a set of DARPA case studies, they summarized it this way in a figure titled “DARPA Factors for Success.”

The good news is that the Smart Cities Actuator instills the commercial version of exactly this set of principles. While the Federal government can focus significant investment on technology development, it seems that the best Government programs are about as good as the best commercial programs. The difference is not the amount of money but the set of practices that make transition effective.

Next (Monday 3/20): How do we know the Actuator is Working? Part 3 – Corporate/University Programs


How Do We Know the Actuator is Working? Part 1 – Commercial Programs


These days, when interviewing ventures for participation in our acceleration programs we typically get asked some variation on that question…what is your success rate? How do we know you will deliver what you promise? Like most reputable programs, CIT has a long track record of successful outcomes across a large portfolio of companies, and we are justifiably proud of that history. We will cite, as do others, some of the statistics on downstream funding and percentage of successful graduates and, along with a few anecdotes of the success stories, that usually answers the question.

But, while there are several listings of many of the accelerator and similar programs, it has been difficult to do direct comparisons of outcomes. For one thing, only a few programs such as TechStars and Y-Combinator publish fairly complete data sets; other data tends to be fragmentary. It is also true that different programs have different objectives, so no single metric is likely to be applicable across the board. Furthermore, how do you compare the outcomes of very early stage programs with those of, for example, A-round institutional investors where the success rates, returns and institutional imperatives are all different?

So, does the accelerator program you are in make a difference? The short answer is yes; the outcomes vary widely. The longer answer gets a bit wonky, so stick with me here. The results, both in this post and the next ones, point to some fairly significant findings as they relate to our national R&D enterprise.

We got the opportunity to study this in more detail during the DHS-funded EMERGE program, initially under a research grant from the USAF Academy. Part of the experimental design for this grant included standing up the EMERGE accelerator as a way to evaluate its effectiveness in technology innovation for the Government. The initial target market was defined as wearables for first responders. After a few months the outcomes looked quite positive, with the status charts showing lots of green indicators and very few yellow or red. Then the sponsors challenged us: those results look good, but can you prove the program is working relative to all the other innovation activities we fund? Woo; tall order.

The key turned out to be the industry average model presented in the last post:


Would a comparison of individual portfolio performance against this yardstick provide a method for measuring success? [Spoiler alert] Apparently so. Of course any research study needs its caveats, and the big ones here are: first, the data were generally taken from publicly published sources as of 2015, so results may have changed or may not be completely accurate; second, the analysis was at an aggregate portfolio level…a good graduate research project would be to re-do this type of analysis at an individual company level within portfolios; and third, the EMERGE data in particular were very preliminary, reflecting only a few months of operation, whereas it generally takes 5-7 years for portfolio results to become reliable.

To start, we looked at internal programs. Using the above reference model, how did the early EMERGE data stack up against our own MACH37 cybersecurity accelerator, and against our GAP funds early stage investment program? The results were quite good (full methodology, references and data are documented in the EMERGE 2015 Final Report). Since this original analysis the GAP fund results have continued to improve with additional later stage funding, the MACH37 results have remained steady as the portfolio continues to grow and mature, and the EMERGE results have declined somewhat but are still above the reference model benchmark results.

In fact these results were so good it looked suspicious. Could we explain how these different approaches got to similar outcome levels, or was there some flaw in the methodology? At a top level at least we were able to rationalize. MACH37 is widely known for the strength of its curriculum and the focused efforts of the partners to secure additional funding for the cohort companies. GAP is known for its highly effective due diligence and mentoring of early stage companies, with the resulting risk reduction making these companies more attractive investments for others. And because EMERGE did not make direct investments but was able to attract a lot of interest from early stage ventures, its cost per “shot on goal” was very low. Since each of these mechanisms is inherent in the model, the results at least passed the initial “sniff test”.

Next, to be sure the model could show some variation in results, we added two more external accelerators where the data were available: TechStars and Y-Combinator, both well-known, highly successful, excellent programs. Surprisingly, both performed somewhat below the canonical model. One could hypothesize for TechStars that their global federation style of what are essentially franchises could produce enough geographic variability to explain those data. But the Y-Combinator result in particular was shocking, since they are the godfather of the accelerator business and widely recognized as one of the most successful. What the heck?


A bit more research however told the tale. The published statistics for Y-Combinator at the time showed that 29% of their accelerator graduates received external funding, less than half of the industry average in the model. But more than 7% of their accelerator graduates made it all the way to a Series B funding event, roughly twice the industry average, and with a Series B investment considered to be $40M or more, more than 4X the industry average. So: a very high failure rate at the accelerator end of things, but an off-the-charts probability of success for those who did continue on. In part this reflects their model of huge cohorts, more than 100 new companies at a time, each receiving a $120K initial investment…$12M per cohort! The accelerator essentially acts as a highly selective due diligence process, resulting in high quality deal flow for the Y-Combinator investors.
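To see how unusual that funnel is, it helps to put the two published percentages side by side. A quick sketch (the Y-Combinator figures are the ones cited above; the 1-in-27 comparison point is my reading of the reference model, so treat it as illustrative):

```python
# Published Y-Combinator statistics cited above (circa 2015).
yc_funded = 0.29      # share of graduates receiving external funding
yc_series_b = 0.07    # share of graduates reaching a Series B event

# Of the YC graduates who do get funded, the share that reach Series B:
conditional_series_b = yc_series_b / yc_funded
print(f"{conditional_series_b:.0%} of funded YC graduates reach Series B")

# Compare: in the reference model roughly 1 in 27 accelerator entrants
# (about 4%) reaches Series B, so 7% overall is roughly double that.
reference_series_b = 1 / 27
print(f"YC multiple over reference model: {yc_series_b / reference_series_b:.1f}x")
```

In other words, conditional on getting funded at all, about one in four YC graduates goes the distance, which is what makes the "selective due diligence" interpretation plausible.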

As Yael Hochberg puts it: “Y Combinator is cashing in on the name it made for itself…[t]hey’re talking about raising a multi-billion dollar late stage fund to take advantage of their model that selects great entrepreneurs rather than mold them.” [emphasis added]

This external validation finally convinced us that the methodology was fairly robust and could produce interesting results. The early stage venture program you are in does make a difference, and we continue to be very proud of the fact that CIT programs consistently perform very strongly in terms of outcomes on these types of objective measures.

The original research effort was funded to also look at the performance of Government research and development programs. How do they stack up? That is the topic of Part 2 of this post, along with some policy implications for the national R&D enterprise.

Next (Thursday 3/16): How do we know the Actuator is Working? Part 2 – Government Programs



Will the Smart Cities Actuator Make Me a Gazillionaire?


As they like to say in The Hunger Games, “may the odds be ever in your favor.” Meaning, that even though many people die along the way, and that your odds of hitting it big are likely smaller than you think, you never know, it’s the most exciting game in town, and what else were you planning to do with the next few years of your life anyway? Starting and running a business is sort of like that. What the actuator can do is substantially improve your odds of getting to a decent exit.

Here’s how it works. First, the failure rate for new businesses is very high. Per the attached chart, almost 78% of all startups fail within the first 5 years…only 22% survive. The comparable number for accelerator graduates is that, after 5 years, 33% are still in business, an increased survival rate of 50% compared to the population of all new businesses.
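That 50% figure is a relative improvement, which is easy to verify from the two survival rates quoted above:

```python
# Five-year survival rates cited above.
all_startups = 0.22        # ~78% of all startups fail within 5 years
accel_graduates = 0.33     # accelerator graduates still in business

# Relative improvement: 0.33 / 0.22 = 1.5, i.e. 50% better survival.
relative_gain = accel_graduates / all_startups - 1
print(f"Relative survival improvement: {relative_gain:.0%}")
```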

Accelerator graduates also do much better at fund raising. There are about 800 accelerator graduates on average per year nationally, out of a total of nearly 600,000 new businesses created per year, or roughly 1/10th of 1% of all new businesses. Yet according to Pitchbook, this small population of companies accounted for almost 33% of the Series A capital raises in 2015.

As part of my research into accelerator models I created a reference model, based on industry averages for funding at various stages of maturity, along with the likelihood that a company will move from one stage to the next.


For a new startup company entering a typical accelerator program (and receiving an average $100K investment to start), about 2/3 of those companies will go on to receive Angel round funding on the order of $250K. Half of those will raise a seed round of $1M, and so forth down the fund-raising line.

When you start stacking up those probabilities, here is what you get. For a single company coming out the end of the pipe with Series B funding, it takes three candidate companies who have raised a Series A round, nine with a seed round, eighteen Angel round companies, and twenty-seven accelerator entrants to account for the successive attrition through the fund-raising rounds (very much in line with The Hunger Games odds, by the way!).
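The attrition arithmetic can be checked with a short script. The first two rates (two thirds reach Angel, half of those raise a seed round) are given earlier; the last two are my inference from the 27:18:9:3:1 funnel, so take them as assumptions:

```python
from fractions import Fraction

# Stage-to-stage conversion rates in the reference model.
conversions = [
    ("Angel",    Fraction(2, 3)),  # of accelerator entrants
    ("Seed",     Fraction(1, 2)),  # of Angel-funded companies
    ("Series A", Fraction(1, 3)),  # inferred from the 9 -> 3 step
    ("Series B", Fraction(1, 3)),  # inferred from the 3 -> 1 step
]

survivors = Fraction(27)  # start with 27 accelerator entrants
print(f"Entrants: {survivors}")
for stage, rate in conversions:
    survivors *= rate
    print(f"{stage}: {survivors}")
# End-to-end: 1 of 27, or about 4% of entrants reach Series B.
```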

Now some companies don’t go through this entire process, and many of the outcomes prior to a Series B can be very nice outcomes for the entrepreneurs. But, the reality is that accelerator graduates do MUCH better than non-accelerator startup companies in terms of both survival and fund-raising, and yet even with these advantages the numbers show only about 4% of these companies go all the way to complete a Series B funding round.

One last consideration: the accelerator you’re in matters. Check out our next post for some more insight on metrics to help understand accelerator performance, and see the difference that a robust accelerator program can make.

In a perfect world, the odds will be ever in your favor, and perhaps the coveted Initial Public Offering (IPO) will occur. Home free at last! Gazillionaire, here we come!! But, life is never quite that easy. Check out this great story from the L.A. Times for some additional insights into the tricky internal company business of going through an IPO.

Next (Monday 3/13): How do we know the Actuator is Working?

What Are the Economics of an Actuator?


Running a successful Actuator, accelerator, incubator or similar early stage investment program requires finding the sweet spot where three sets of economics overlap: those of the early stage entrepreneurial ventures, the investors that support the ecosystem, and the actuator itself.

The economics for early stage entrepreneurial ventures is conceptually fairly straightforward: when does the cash run out, and can I raise enough money before then to keep the company going? In the vernacular, this is the “runway”…how much runway do I have left. And the job of the early-stage CEO is almost always heavily tilted towards fundraising.

Two things make this more palatable. First, a successful accelerator will already include a number of investors in the ecosystem, and the program itself will help the entrepreneur better understand who to approach for funding, how and why. Second (the economic carrot in this plot) is the proverbial “exit.” At some point a successful early stage company starts selling enough product that somebody thinks the company has a great future, or the people are worth collaborating with, or the product is a good strategic fit. At that point they may buy the company, do an “acqui-hire,” or put in enough money that they bring in some new people to help run the company. Oh yes, and occasionally things can even appear so successful that they start selling stock to the public: the IPO.

Investor economics are similarly straightforward: investors are always looking for good (or great) returns based on the amount of risk they take. Early stage investors make very risky investments, and so expect very good returns for their money. How risky? Well, the attached chart shows that roughly half of all venture investments lose money while only 5% generate about 1/3 of the total returns; not necessarily where people want to put the bulk of their retirement funds, for example.

On the West Coast this has in part led to the great Unicorn hunt, with big money trying to find or create the 1% of that 5% that turns into something like Facebook or the Snap IPO. But it does take big money; if 0.05% of your investments turn into unicorns, $1M investment per attempt takes $2B of investment capital. Things are a little more conservative on the East Coast, and the above chart also shows that half the returns come from investments that yield 2X to 5X the invested capital. And early seed round investments are more like $50K or $100K instead of $1M, meaning that this scales to be a feasible investment strategy for people with high risk tolerance but not quite as much investable capital as Silicon Valley.
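The capital math behind the unicorn hunt is worth making explicit. A sketch of the back-of-the-envelope numbers above:

```python
# "1% of that 5%": unicorns as a fraction of all venture investments.
unicorn_rate = 0.01 * 0.05              # 0.05% of investments
attempts_needed = 1 / unicorn_rate      # ~2,000 attempts per unicorn
capital_per_attempt = 1_000_000         # $1M per attempt, per the post

total_capital = attempts_needed * capital_per_attempt
print(f"Capital to expect one unicorn: ${total_capital / 1e9:.0f}B")
```

The East Coast version of the same arithmetic, with $50K-$100K seed checks instead of $1M, shrinks the required capital by 10-20x, which is why the more conservative 2X-5X return strategy scales down to smaller pools of investable capital.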

For individual investors in this category finding a stream of suitable investments and assessing the risk of each one is a daunting challenge. However the Actuator plays an important role here as well, since a significant part of Actuator activity revolves around creating and evaluating a flow of interesting companies, and reducing the risk of those companies surviving and reaching market. This intermediary role for an accelerator in both reducing risk and matching investors and entrepreneurs is one of the characteristics that can make them so effective.

What about the Actuator economics? To understand that, these entities are best viewed as startup ventures themselves. The ones we have worked with around the country generally have operating costs in the range of $1M – $5M per year, accounting for salaries, facility and other associated operational costs. Where does that money come from? For many it comes from grants, or community or University funding, legislative appropriations or sponsorships, and these sources are critical for establishing and maintaining a program in the early stages.

But is there at least a conceptual model that would minimize or eliminate the need for these external funding sources over time? Yes, and it is very analogous to a startup gaining enough product traction to be self-sustaining on the basis of revenue generated from product sales. The “product” that accelerators “sell” is investment opportunity in early stage ventures. The quality of that product is directly tied to the quality of the incoming ventures as well as the ability of the accelerator to reduce the risk of those investments through mentoring and interaction with a robust ecosystem.

One self-sustaining economic model uses equity investment returns of the accelerator to pay for operating costs, as shown in the figure. It takes 4-7 years or more to realize the value of these investments; the model assumes 5 years. It also assumes $2M/yr operating costs, a dozen investments of $50K per year ($600K total), and a distribution of outcomes consistent with those shown above. So after 5 years, 1/3 (four) of these companies are likely to have failed, we assume four generate 2X ($100K) returns, three generate 5X ($250K) returns, and one generates a 40X ($2M) return. Note that if your initial $50K investment was in exchange for 8% equity and no dilution occurs in between, the company value would need to be $25M for your piece to be worth $2M. As the economic model chart shows, under these somewhat optimistic assumptions (and assuming you can repeat this success year after year) the model can become self-sustaining.
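The cohort arithmetic in that paragraph can be laid out in a few lines (all figures are the assumptions stated above: $2M/yr operating costs, twelve $50K investments per year, and the 0x/2x/5x/40x outcome mix realized after about five years):

```python
# One cohort year under the post's assumptions.
operating_cost = 2_000_000
check_size = 50_000
# Outcome multiples: 4 failures, 4 at 2x, 3 at 5x, 1 at 40x.
outcome_multiples = [0] * 4 + [2] * 4 + [5] * 3 + [40]

invested = check_size * len(outcome_multiples)             # $600K deployed
returned = sum(m * check_size for m in outcome_multiples)  # $3.15M back
net = returned - invested - operating_cost

print(f"Invested: ${invested:,}  Returned: ${returned:,}  Net: ${net:,}")
# The positive net (here $550K) is what makes the model self-sustaining,
# provided the outcome distribution repeats year after year.
```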

While this is a challenging model, it is also helped by the fact that there are a number of secondary benefits that entice external groups to help defray some of the costs. For later stage investment groups this private source of vetted deal flow is attractive. For Universities the ties to entrepreneurship curricula make it a reasonable extension of those efforts. Opportunities for economic growth often entice legislatures, and strategic Corporate partners may see sponsorship as an inexpensive way to find strategically relevant innovative technologies.

Creating and sustaining a successful accelerator-type program requires the ability to thread the needle in a way that meets the economic imperatives of three major stakeholder groups: the entrepreneurs, the investors, and the accelerator itself. No wonder that many of these programs do not survive when the initial funding runs out. CIT has been fortunate to have both a successful investment experience with early stage ventures over many years, and much better than average outcomes in running accelerator programs directly. This proven success plus great partners, good timing, and a great environment for building business ecosystems is what will help ensure the success of our new Smart City Works Actuator.

Next (Thursday 3/9): Will the Smart Cities Actuator Make Me a Gazillionaire?