When Steve Blank wrote “The Four Steps to the Epiphany”, he brought about a sea change in the way technology entrepreneurs do business. Rather than plowing ahead with a technology-led process, most entrepreneurs have embraced “customer development”. Getting out of the building and doing primary market research has saved a great many startups from solving the wrong problems and repeating the mistakes of projects like Google Glass.
You don’t know what you don’t know
When you are just getting started, the first thing to do is to admit that you don’t know what you don’t know. You have hypotheses about your target market and end users, but you don’t know if your intuition is right.
What you need to do at this stage is “problem research”. This is the phase in primary market research where you try to understand the problem. The technique that is central to the Lean Startup movement, MVP (minimum viable product) testing, is all about “solution research”, which I will cover in a separate post.
Understanding which papers attract critical citations, and what effect they have, gives an insight into how science progresses, says Christian Catalini.
Science advances through researchers sharing their work for others to extend or improve. As Isaac Newton once said, he could see further by “standing on the shoulders of giants”.
But what happens when those shoulders aren’t as sturdy as we thought? Sometimes, citations are negative, pointing out a study’s flaws or even disproving its findings. What role, relevance and impact do these negative citations have on a field as a whole?
There has been little research in this area because of the difficulty of identifying and classifying such citations. Thanks to advances in natural-language processing (the ability of computers to understand human language) and in the capacity to sort and analyse large bodies of text, this is changing. We can now identify such citations and reconstruct the context in which they were made, giving a better picture of the citing author’s intentions. Using such techniques, my colleagues and I have found evidence that negative citations play an important role in the advancement of science.
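The core step described here, labeling a citation by the sentiment of its surrounding context, can be sketched with a toy keyword-based classifier. This is a minimal stand-in, not the authors’ method: the cue phrases and example sentences below are illustrative assumptions, and real pipelines use trained natural-language-processing models rather than keyword lists.

```python
# Toy stand-in for negative-citation detection: flag a citation as
# "negative" if the sentence around it contains a critical cue phrase.
# Cue phrases here are illustrative, not from the study.
NEGATIVE_CUES = {
    "contradicts", "fails to", "flawed", "disproves",
    "unable to replicate", "overestimates",
}

def citation_polarity(context: str) -> str:
    """Label the sentence surrounding a citation as 'negative' or 'other'."""
    text = context.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        return "negative"
    return "other"

print(citation_polarity("This result contradicts Smith et al. (2010)."))
print(citation_polarity("We build on the framework of Smith et al. (2010)."))
```

A keyword list is brittle (it misses hedged criticism and misfires on negated phrases), which is exactly why the advances in natural-language processing mentioned above were needed to study negative citations at scale.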
The government should be smart and strategic about the type of spending it does, says David Schmittlein, dean of the MIT Sloan School of Management: if it spends on innovative enterprise in America, it can put those dollars to better use.
What type of corporate culture is best for innovation? How ought firms and managers encourage their workers to be more creative? And if those workers fail in the pursuit of creativity, is that necessarily a bad thing?
These are the questions we wanted to answer in our latest paper.* We used the life sciences as the backdrop of our research, comparing similarly accomplished scientists who received either financial support from the Howard Hughes Medical Institute (HHMI), the large non-profit biomedical research organization, or federal funding from the National Institutes of Health (NIH). The HHMI money lasts five years and is often renewed (at least once); the program “urges its researchers to take risks … even if it means uncertainty or the chance of failure.” The NIH grants, on the other hand, last three to five years, have more specific aims, and their renewal is far from assured.
MIT Sloan Assoc. Prof. Gustavo Manso
Among other things, we looked at how often these scientists published articles that were among the top 5 percent or top 1 percent of the most cited papers in their fields. We found that the HHMI-funded scientists produced twice as many papers in the top 5 percent in terms of citations, and three times as many in the top 1 percent, relative to a control group of similarly accomplished scientists funded by the NIH. But they were also more likely to underperform relative to their own previous citation accomplishments. The takeaway is clear: biologists whose funding encourages them to take risks and tolerates initial research failures produce breakthrough ideas at a much higher rate than peers whose funding depends on meeting closely defined, short-term research targets. But there is a cost associated with these long-term incentives, since they also lead to more frequent “duds.”
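The metric used above, the share of a scientist’s papers landing in a field’s top 5 percent by citations, can be sketched as follows. This is a simplified illustration under assumed data, not the study’s actual methodology: the citation counts are made up, and the real analysis controlled for fields, cohorts, and researcher quality.

```python
def top_share(paper_citations, field_citations, pct=0.05):
    """Fraction of a scientist's papers whose citation counts reach the
    top `pct` of all papers in the field (illustrative toy metric)."""
    cutoff_rank = max(1, int(len(field_citations) * pct))
    # Threshold = citation count of the paper at the top-pct boundary.
    threshold = sorted(field_citations, reverse=True)[cutoff_rank - 1]
    return sum(c >= threshold for c in paper_citations) / len(paper_citations)

# Hypothetical field of 100 papers with citation counts 1..100:
field = list(range(1, 101))
# A scientist with four papers; two reach the top-5% threshold (>= 96).
print(top_share([100, 97, 50, 10], field))
```

Comparing this share between two funding groups, and also tracking how often each scientist falls below her own past citation record, mirrors the two sides of the finding: more breakthroughs, but also more “duds.”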