How Do We Know the Actuator is Working? Part 1 – Commercial Programs

Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.

These days, when interviewing ventures for participation in our acceleration programs, we typically get asked some variation on that question…what is your success rate? How do we know you will deliver what you promise? Like most reputable programs, CIT has a long track record of successful outcomes across a large portfolio of companies, and we are justifiably proud of that history. We cite, as others do, statistics on downstream funding and the percentage of successful graduates; together with a few anecdotes about the success stories, that usually answers the question.

But while there are several listings of accelerators and similar programs, it has been difficult to make direct comparisons of outcomes. For one thing, only a few programs such as TechStars and Y-Combinator publish fairly complete data sets; other data tend to be fragmentary. It is also true that different programs have different objectives, so no single metric is likely to be applicable across the board. Furthermore, how do you compare the outcomes of very early stage programs with those of, for example, A-round institutional investors, where the success rates, returns and institutional imperatives are all different?

So, does the accelerator program you are in make a difference? The short answer is yes: outcomes vary widely from program to program. The longer answer gets a bit wonky, so stick with me here. The results, both in this post and the next one, point to some fairly significant findings as they relate to our national R&D enterprise.

We got the opportunity to study this in more detail during the DHS-funded EMERGE program, initially under a research grant from the USAF Academy. Part of the experimental design for that grant included standing up the EMERGE accelerator as a way to evaluate its effectiveness in technology innovation for the Government. The initial target market was defined as wearables for first responders. After a few months the outcomes looked quite positive, with the status charts showing lots of green indicators and very few yellow or red. Then the sponsors challenged us: those results look good, but can you prove the program is working relative to all the other innovation activities we fund? Woo; tall order.

The key turned out to be the industry average model presented in the last post:

[Figure: industry average reference model (slide3)]

Would a comparison of individual portfolio performance against this yardstick provide a method for measuring success? [Spoiler alert] Apparently so. Of course any research study needs its caveats, and the big ones here are: first, the data were generally taken from publicly published sources as of 2015, so results may have changed or may not be completely accurate; second, the analysis was done at an aggregate portfolio level…a good graduate research project would be to redo this type of analysis at an individual company level within portfolios; and third, the EMERGE data in particular were very preliminary, reflecting only a few months of operation, whereas it generally takes 5-7 years for portfolio results to become reliable.
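To make the comparison concrete, here is a minimal sketch of how a portfolio's funnel of outcomes can be lined up against a reference model of this kind. The benchmark rates and portfolio counts below are illustrative placeholders, not the actual reference model or the EMERGE, MACH37, or GAP data.

```python
# Illustrative sketch only: compares a portfolio's stage-conversion rates
# against an industry-average reference model. All numbers are placeholders,
# not the figures from the EMERGE 2015 Final Report.

REFERENCE_MODEL = {
    "external_funding": 0.60,  # share of graduates raising any outside money (placeholder)
    "series_a": 0.15,          # share reaching a Series A round (placeholder)
    "series_b": 0.035,         # share reaching a Series B round (placeholder)
}

def compare_to_reference(portfolio_counts, graduates):
    """Return each stage's conversion rate and its ratio to the benchmark."""
    report = {}
    for stage, benchmark in REFERENCE_MODEL.items():
        rate = portfolio_counts.get(stage, 0) / graduates
        report[stage] = {
            "portfolio_rate": round(rate, 3),
            "benchmark": benchmark,
            "ratio_to_benchmark": round(rate / benchmark, 2),
        }
    return report

# Hypothetical portfolio: 40 graduates, of which 22 raised outside funding,
# 7 reached a Series A and 2 reached a Series B.
print(compare_to_reference({"external_funding": 22, "series_a": 7, "series_b": 2}, graduates=40))
```

A ratio above 1.0 at a given stage means the portfolio is outperforming the benchmark at that stage; the interesting cases, as with Y-Combinator below, are portfolios that sit below the benchmark early in the funnel and well above it later.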

To start, we looked at internal programs. Using the above reference model, how did the early EMERGE data stack up against our own MACH37 cybersecurity accelerator, and against our GAP Funds early stage investment program? The results were quite good (full methodology, references and data are documented in the EMERGE 2015 Final Report). Since this original analysis, the GAP Funds results have continued to improve with additional later stage funding, the MACH37 results have remained steady as the portfolio continues to grow and mature, and the EMERGE results have declined somewhat but are still above the reference model benchmark results.

[Figure: Reference model CIT1]

In fact these results were so good they looked suspicious. Could we explain how these different approaches got to similar outcome levels, or was there some flaw in the methodology? At a top level, at least, we were able to rationalize the results. MACH37 is widely known for the strength of its curriculum and the focused efforts of the partners to secure additional funding for the cohort companies. GAP is known for its highly effective due diligence and mentoring of early stage companies, with the resulting risk reduction making these companies more attractive investments for others. And because EMERGE did not make direct investments but was able to attract a lot of interest from early stage ventures, its cost per “shot on goal” was very low. Since each of these mechanisms is inherent in the model, the results at least passed the initial “sniff test”.

Next, to be sure the model could show variation in results, we added two external accelerators for which data were available, TechStars and Y-Combinator, both well-known, highly successful, excellent programs. Surprisingly, both performed somewhat below the canonical model. One could hypothesize for TechStars that their global federation of what are essentially franchises could produce enough geographic variability to explain those data. But Y-Combinator in particular was shocking, since they are the godfather of the accelerator business and widely recognized as one of the most successful. What the heck?

[Figure: Reference model CIT]

A bit more research, however, told the tale. The published statistics for Y-Combinator at the time showed that 29% of their accelerator graduates received external funding, less than half of the industry average in the model. But more than 7% of their accelerator graduates made it all the way to a Series B funding event, roughly twice the industry average, and with a Series B investment considered to be $40M or more, more than 4X the industry average. So: a very high failure rate on the accelerator end of things, but an off-the-charts probability of success for those who did continue on. In part this reflects their model of huge cohorts, more than 100 new companies at a time, each receiving a $120K initial investment…$12M of investment per cohort! The accelerator essentially acts as a highly selective due diligence process, resulting in high quality deal flow for the Y-Combinator investors.
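As a quick sanity check on that reading, here is a back-of-the-envelope sketch using only the figures quoted above; the final ratio (roughly a quarter of externally funded graduates reaching Series B) is my own arithmetic from those published 2015 numbers, not a statistic reported by Y-Combinator.

```python
# Back-of-the-envelope check using only the figures quoted in this post
# (published Y-Combinator statistics as of 2015).

yc_external_funding_rate = 0.29  # share of YC graduates receiving external funding
yc_series_b_rate = 0.07          # share of YC graduates reaching a Series B

cohort_size = 100                # "more than 100 new companies at a time"
initial_investment = 120_000     # $120K initial investment per company

print(f"Investment per cohort: ${cohort_size * initial_investment:,}")        # -> $12,000,000
print(f"Series B events per 100 graduates: ~{cohort_size * yc_series_b_rate:.0f}")
print(f"Share of externally funded graduates reaching Series B: "
      f"{yc_series_b_rate / yc_external_funding_rate:.0%}")                   # -> ~24%
```

The numbers support the due diligence interpretation: most of the attrition happens before any outside money arrives, but the companies that do raise go on to later rounds at an unusually high rate.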

As Yael Hochberg puts it: “Y Combinator is cashing in on the name it made for itself…[t]hey’re talking about raising a multi-billion dollar late stage fund to take advantage of their model that selects great entrepreneurs rather than mold them.” [emphasis added]

This external validation finally convinced us that the methodology was fairly robust and could produce interesting results. The early stage venture program you are in does make a difference, and we continue to be very proud of the fact that CIT programs consistently perform very strongly in terms of outcomes on these types of objective measures.

The original research effort was funded to also look at the performance of Government research and development programs. How do they stack up? That is the topic of Part 2 of this post, along with some policy implications for the national R&D enterprise.

Next (Thursday 3/16): How Do We Know the Actuator is Working? Part 2 – Government Programs
