Follow us @CITOrg or @dihrie or this blog for current information on the new Smart City Actuator.
Parts 1 and 2 of this series looked at methods for measuring the success of commercial and government programs for accelerating innovation. The other major sources of innovation in the U.S. economy are Corporate R&D efforts and University programs. How effective are these sources at driving innovation? It is hard to generalize, given the very large number of Corporate efforts and University programs and the paucity of hard data, although the anecdotal evidence is mixed to negative.
It does seem as though our model of funding leverage and transition probabilities could be adapted to measure success here as well. For this analysis, however, alternate data sources provide some quantitative clues that support the anecdotal data. On the Corporate side there are certainly large-company innovation success stories. Apple, for example, succeeded with its early mass-market personal computer and, more recently, with its mobile devices (iPhone and iPad). One of the more amazing companies in this regard has to be General Electric which, over the last 50 years (based on where its profits originate), has gone from a heavy equipment manufacturer to a financial services company to the current big data company that also happens to be one of the world’s leading manufacturers…and always as a market leader.
But the anecdotal and other data tell a different story. Many accelerator organizations around the country conduct what are essentially consulting engagements with large Corporations to help them engage the innovation ecosystem. Data show that 86% of the original 1955 Fortune 500 companies no longer exist, and that 50% of the Fortune 500 of the year 2000 have also failed. The average life expectancy of all companies is only 12.5 years, and the failure rate seems to be increasing. Forbes sums up the reason in a headline: “Why U.S. Firms Are Dying: Failure to Innovate.”
One solution might be for Corporations to invest more in R&D. Alas. Re/code reports on a clever study by Bernstein Research analyst Toni Sacconaghi that looked at “historical R&D spending figures as a percentage of sales for publicly traded tech companies with market caps of $2 billion”, then tracked their stock performance five years later, when presumably the research might have paid off. The chart summarizes his data and concludes that there is “no meaningful relationship between R&D spending and stock performance”. I had a hard time deciphering the chart, so I decided to put it in a more graphical form and insert a few trend lines. The results were actually worse than Sacconaghi was willing to conclude. Not only is the regression line negative (the more you spend, the worse your results), but the standard deviation also decreases with spending, meaning that the likelihood of a poor result is much higher the more you spend. Looking at this chart as a non-financial person, it is not clear to me why anybody would invest in companies spending more than 20% of sales on R&D. As a side note, something really interesting seems to be happening around a magical number of 7-8% R&D spending, but I haven’t a clue what that might be…ideas welcome!
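Sacconaghi’s underlying figures are not reproduced here, but the kind of analysis described above — fit a trend line, then check how the spread of outcomes changes with spending — can be sketched with purely illustrative numbers:

```python
import numpy as np

# Hypothetical, illustrative data only -- NOT Sacconaghi's actual figures.
# rd_pct: R&D spending as % of sales; returns: 5-year stock return in %.
rd_pct = np.array([2, 4, 6, 8, 10, 12, 15, 18, 22, 28, 35])
returns = np.array([40, 55, 30, 70, 25, 10, 15, -5, 0, -20, -10])

# Ordinary least-squares trend line: return ~ slope * rd_pct + intercept.
# A negative slope is the "more spending, worse results" pattern.
slope, intercept = np.polyfit(rd_pct, returns, 1)
print(f"slope = {slope:.2f}")

# Dispersion check: compare the spread of returns for low vs high spenders.
# A smaller spread among high spenders, combined with a lower mean, is what
# makes a poor outcome more likely at high spending levels.
low = returns[rd_pct <= 10]
high = returns[rd_pct > 10]
print(f"std below 10%: {low.std():.1f}, std above 10%: {high.std():.1f}")
```

With these made-up numbers the slope comes out negative and the high-spending group shows both a lower mean and a tighter spread, mirroring the pattern described in the text.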
How about Universities? Many of them have Tech Transfer offices for their research-derived intellectual property (IP), and again there are certainly success stories at MIT/Harvard, Carnegie Mellon, Stanford and others. Anecdotally, however, these offices, many based on a traditional licensing model, have overall not been very successful. Two reasons are often given: first, IP transfers much more effectively via people than via paper, so licensing models without researcher support are not effective; and second, some Tech Transfer offices like to value the IP on the basis of research dollars spent, not on market value. As one of my colleagues put it: “when I have these discussions the Tech Transfer Office will often tell me the value of the IP is $10M or something; I tell them it’s worth zero unless somebody takes it to market, and we negotiate from there.” Fortunately, some of the more forward-looking Universities are starting to migrate toward a shared risk/reward model where the value of IP is set by marketplace outcomes.
Is there an explanation for this phenomenon? One possible answer lies in where the funding for research comes from, and where it goes. Again an obscure chart, this one from NSF. After staring at this one for a while, what I really wanted to know was the net funding flows. For example, in Basic Research the Federal Government funds about 60% but conducts only about 10%…where does the other 50% go? Mostly to Universities, which fund 10% and perform more than 50% of the Basic Research. OK, easy enough to turn into a graph, where the nodes are the sources, scaled to size by percentage of funds sourced, and the edges are the net flows, also scaled to size. Applying some awesome PowerPoint skills and a garish color scheme, the Basic Research picture looks like this. Half of all research dollars flow from the Federal Government to University-based research, with a small additional amount funded by Industry, totaling about 2/3 of all research performed.
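The net-flow bookkeeping behind that graph is simple arithmetic on the funded-versus-performed shares. A sketch, using the approximate round numbers quoted above (illustrative, not exact NSF figures):

```python
# Approximate shares (%) of Basic Research, as quoted in the text.
# "Industry" and "Other" shares are illustrative assumptions chosen
# so that each column sums to 100.
funded = {"Federal": 60, "Universities": 10, "Industry": 25, "Other": 5}
performed = {"Federal": 10, "Universities": 55, "Industry": 30, "Other": 5}

# Net flow for each sector: positive means it performs more research than
# it funds (a net recipient of research dollars), negative a net funder.
net = {s: performed[s] - funded[s] for s in funded}
for sector, flow in sorted(net.items(), key=lambda kv: -kv[1]):
    label = ("net recipient" if flow > 0
             else "net funder" if flow < 0
             else "balanced")
    print(f"{sector:12s} {flow:+3d} points ({label})")
```

The Federal Government comes out roughly 50 points negative and Universities roughly 45 points positive, which is the dominant edge in the flow graph.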
Now applying the same analysis to the Development portion of the NSF chart yields the following. Here, almost 90% of all Product Development activity is funded and performed by Industry, with some support from the Federal Government, while Universities are almost invisible. No wonder there is a bit of a disconnect; Universities apparently are not at all focused on commercializing their research, if the funding numbers are to be believed. One last chart provides the summary. Looking at the per capita number of startups based on University-developed IP, the numbers have been dropping for a while. More to the point, the numbers are low. For Virginia, for example, they equate to about 20 startups per year. Our cybersecurity accelerator, MACH37, evaluates more than 100 companies per year to select a dozen participants in that one sector alone. Numbers are similar for most venture investors, with only single-digit percentages of the deals reviewed actually resulting in investment. Thus for Virginia this may equate to one or two investable companies per year based on University-generated IP. To be fair, this probably underestimates the amount of entrepreneurial activity generated by University programs and proximity, but it is probably reasonably accurate in terms of institutional success, based at least on the anecdotal evidence.
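The back-of-the-envelope arithmetic behind that "one or two investable companies" estimate can be made explicit. The 5–10% selection rate below is an assumption standing in for the "single-digit percentages" most venture investors report:

```python
# Figures from the text: ~20 University-IP startups per year in Virginia.
university_startups_per_year = 20

# Assumed investor selection rate: single-digit percentages of deals
# reviewed actually get funded (5-10% used here as an illustrative band).
selection_rate_low, selection_rate_high = 0.05, 0.10

investable_low = university_startups_per_year * selection_rate_low
investable_high = university_startups_per_year * selection_rate_high
print(f"investable companies per year: "
      f"{investable_low:.0f} to {investable_high:.0f}")
```

Under those assumptions the estimate lands at one to two investable companies per year, matching the figure in the text.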
It is clear now, having looked at innovation and technology transfer across commercial accelerators, the Federal Government, Corporations and Universities, that successful innovation is one of those “wicked problems.” While there are successes in each of these domains, there are no magic bullets, no guaranteed approaches to innovation. So, how do we know our Smart City Actuator is working? And are there ways to make this entire national research enterprise more efficient? We will explore those questions in Part 4 of this series.
Next (Thursday 3/23): How do we know the Actuator is Working? Part 4 – Synthesis and Policy