Data-based decision-making in corporate RDI projects – is it just a mirage or the future?

Written by: Endre Spaller and Tibor Dőry
Posted on: Dec 5, 2020

Abstract: The problems with the peer review process are well documented, yet it remains the mainstream method for evaluating and deciding on publicly funded corporate RDI projects. Reviewers make recommendations to the decision makers of RDI funding schemes based on the novelty and innovativeness of the project proposal, which says little about the growth capabilities of the organization. Policy documents also recognize digitalization as a technology that can complement the existing decision-making process in this field. This study proposes a new framework integrating existing digital tools which can complement and support the peer review process of corporate RDI projects in the future.

Keywords: peer review, corporate RDI, big data, semantic similarity, social media listening

JEL Classification: D7, D83, O3

Introduction

Despite the numerous critiques of the peer review process, the main argument for it is that there is no better methodology for funding corporate innovative projects. As digitalization appears on the horizon, it challenges this statement. This article is about the digital tools and datasets that can support, and in some regards replace, the peer review process.

We collected what science already knows about this topic. We outline how to build up a data-based decision-backing system, and we draw the contours of such a system as it appears to us today.

Data-based decision-making in the public sector is an urgent and timely question. Since the public sector spends public money under heavy regulation, it is rarely an early adopter. As we will show, there is now enough knowledge and experience in using digital technologies that the need to use them appears in policy documents. But so far nobody has created a framework for data-based decision-making in this sector.

Theoretical background

Innovation has been deeply disrupting the world economy (Vasa, 2010; Romano and Trau, 2017; Vasvári et al., 2020). Even if a country lags behind in innovation, it cannot avoid innovation's negative effects; it merely forgoes the positive ones. That is why countries spend more and more money on innovation. But is it possible to spend this money more effectively, or is the current way of spending innovation funds effective enough?

There are three main ways of reforming innovation expenditure that are thought to make it more effective:

– giving refundable funds instead of non-refundable ones,

– changing the funding schemes and the eligibility criteria,

– changing the decision-making process.

With refundable sources you reach more mature companies and innovation ideas than with non-refundable ones. Both have their own role in the innovation landscape. This, however, is a bypass: an attempt to get around the problem instead of penetrating and reorganizing it.

Changing the funding schemes usually lacks evidence; it is based on experts' views. A funding cycle is too long to get reliable data from previous programs. It takes 1-2 years from experiencing a problem to having it in a policy document. From the publication of the call to the signing of a funding agreement takes about another year. The winners then work on their projects for 2-4 years. Only after that can we monitor the first data, such as income or efficiency, and it takes another 2-3 years to judge roughly whether a project was successful or not. Only after all these stages can we summarize and evaluate the program, which takes further months. So a funding cycle is at least 5 years, but usually much longer, and changes in the funding schemes cannot wait that long.

The third way is reforming the decision-making process. Today we make decisions on non-refundable innovation funds targeting companies with the peer review process. But there is no evidence at all that the peer review methodology can predict economic success at a company. It is used because:

– it is widespread in evaluating research, and innovation is somewhat similar;

– we don’t have a better tool right now.

Johnson (2008) shows cases at the NIH where reviewers gave different scores to proposals even though they ranked them similarly. But as not all proposals were scored by all reviewers, the final ranking depended on what each reviewer meant by a given grade. This is a widely known distortion. Johnson showed that this distortion can affect 25% of the funded projects.

Funding agencies sometimes have to evaluate thousands of projects with a limited amount of time and money. Evaluating a corporate project proposal requires many different types of knowledge: academic, technological, corporate, market, economic, etc. (Van der Meulen, 2010; Vasa et al., 2012). It is very hard to find evaluators in large numbers who have all of these skills. A bureaucracy that evaluates projects will do so with the best experts available at that moment, but they are not necessarily good enough and their evaluations are not necessarily deep enough.

We have tools for measuring scientific excellence, but business excellence is both harder to conceptualize and harder to measure:

– it is hard to compare a previous success with the project on the table

– it is hard to distinguish between lack of business excellence and bad luck or unforeseeable things

– it is hard to reliably measure an innovation's success among the other activities of the company.

So, instead of trying to measure business excellence, funding agencies use peer review as a proven, widely accepted process.

The peer review process is based on the four-eyes principle. Because such wide expertise is needed, the four-eyes principle is really hard to meet during the evaluation of a corporate innovation project.

While in the academic sector it is quite common to take part in evaluations for free, or at least for a very low fee, in the corporate sector nothing is free. Good expertise has a high cost, and funding agencies usually do not have that much money.

Diverging interpretations of grades, problems with the availability of proper expertise, difficulties in conceptualizing business excellence, violation of the four-eyes principle, and a lack of money: these are the basic problems of using peer review as the core decision-making process for corporate innovation projects. So it is worth thinking about how to reform the whole decision-making process.

Digitalization is developing so fast that we have to consider whether it can influence decision-making. The Daejeon Declaration (OECD, 2015, p. 4) states that “the difficulty and cost of monitoring many types of activity has declined. As a consequence, the implementation of policies as well as the behaviors of recipients of public funds can be monitored more closely.” It adds that “… science, technology and innovation are being revolutionized by the rapid evolution of digital technologies which are changing the way scientists work, collaborate and publish…” and that “Science and innovation governance and policies are themselves also being affected.”

Our research objective is to collect the tools that can support or complement the peer review process and to fit them into one framework.

Discussion

Innovation is a knowledge-based activity that aims to develop a sellable, marketable product or service. Here we do not deal with innovation that happens in companies through reception or adaptation.

Project funding is defined as money attributed to a company or an individual to perform an innovation activity that is limited in scope, budget and time, with the goal of making a profit for the company. It is identified by three main characteristics: a) funds are attributed directly to the company or to a person in the company; b) the scope and duration of the supported innovative project are limited; c) funds are attributed by a research funding organization (RFO) or funding agency external to the organization of the recipient company or person (Vasa and Sitenko, 2011; Vanino et al., 2019). An RFO or funding agency is a state-run agency, office, or department of a ministry whose goal is to distribute public money among innovators in order to foster their activities.

A successful project can be defined from four aspects:

– is it able to win the targeted funding?

– is it able to account for these funds and to close the funding contract?

– is it able to carry out all the planned activities and, as a result, produce all the necessary information?

– is it able to make a profit from the product or service?

Most funding agencies examine the second and, partially, the third aspect. In our view the success criterion is broader.

Long-term growth will be achieved by those countries that invest in human capital, knowledge and innovation. These resources have a spillover effect: they spread throughout the entire economy and contribute to growth (Romer, 1990). Jones (1995, p. 761) criticizes Romer because his model implies a “scale effect”: an increase in R&D resources (money, personnel, etc.) should increase the rate of growth. This does not hold, even though several countries appear to behave as if it did.

What policy makers expect from R&I investment is an impact on GDP growth (Solow, 1957; Shibata et al., 2008; Sitenko and Vasa, 2018). Public R&I policies are not directly tied to GDP growth expectations, but these expectations lie behind them. The policies are justified by market failures, positive spillovers and negative externalities (EC, 2017/2). “The standard market failure rationale for business R&D support is that firms tend to underinvest in R&D on account of its costs and uncertainty, the time required to obtain returns on investment, and the possibility that competitors can capture knowledge spillovers” (OECD, 2016). Vanino et al. (2019) summarize the rationale for public support of companies' R&D activities. The main reason is its effect on knowledge and value creation. They identify four mechanisms:

– public support increases liquidity and financial slack;

– through cost-sharing, it reduces the required investment and de-risks private investment;

– where there is a market failure, innovation support may have market-making objectives to address particular social or economic challenges;

– public R&D and innovation support can play an enabling or bridging role, helping firms to access otherwise unavailable new or pre-existing knowledge.

GDP growth is a positive spillover from this point of view. The Lamy report (EC, 2017/2) says that the EU is not good at the commercialization of research, while another EU document (EC, 2017) states that every euro spent generates a 10-30% benefit (see also Hather et al., 2010). Here the problem of impact assessment arises. Bornmann (2013, p. 219) collects the problems of impact assessment and concludes that it is impossible to measure the impact of research correctly, and thus impossible to demonstrate it to policy makers.

Following the 2008 economic crisis, many governments reinvented targeted industrial policy. Concerns about the loss of manufacturing capacities, growing competition from emerging economies, and a science- and technology-driven new production revolution have contributed to a surge in interest (OECD, 2016). Concerns about losing something important or falling behind competitors are a rationale for RDI investment as well. This argument again relates RDI funding to economic growth.

Jaruzelski et al. (2005) studied the thousand publicly traded companies that spent the most on RDI. They found no connection between high RDI expenditure and company value, income or any other success indicator. You cannot simply buy results with money. Yet this is exactly what funding agencies are trying to do: convert state funds into RDI results and profit. This process is not that simple.

So we see that governments' main reason to support RDI is that they hope for some economic impact. If a policy maker aims to increase economic growth through research and innovation, then just spending more money is not enough. A technology is needed to distribute the available funds better among the recipients.

Research on funding mechanisms

At the beginning of this work we supposed that project evaluation was a fine-tuned tool in the service of economic growth. But the first studies that tried to develop new indicators on pre-existing databases are barely a decade old. Lepori et al. (2007) showed that such indicators could provide useful results for the comparative analysis of public research policies.

As Reale (2017, p. 10) states “there is wide variation in European countries both in terms of relative importance of public funding and the mechanisms and criteria applied for its allocation” and “national policies and programs are being developed without any obvious alignment with the parallel situation in other countries”.

Reale (2017, p. 12) also states that a basic level of international comparability of datasets is an attainable objective. A major difficulty in investigating the different national R&D funding systems is the insufficient availability of quantitative and qualitative data on the instruments and actors involved in their management. We are not the first to point out that project evaluation is falling behind the technology (Arnold, 2004; Tóth-Haász et al., 2019).

Wang et al. (2017) draw attention to the fact that there is limited theory predicting the effects of public R&D intervention on the performance of firms. Reading their analysis, it is clear that in the age of big data nobody has yet drawn up a framework for using big data in public RDI funding, even though big data and digitalization are thought to be the technologies that will revolutionize the distribution of funds. There is an information asymmetry between the company and the funding agency (De Fraja, 2016), but nobody has tried to overcome it using big data. All the studies have tried to polish individual indicators and analyze them. In the age of big data a single indicator is no more reliable than before, but there are so many indicators that together they are much more reliable than the databases existing today.

The principles of building up databases

It sounds quite reasonable to build databases and draw conclusions from them for the future. Why isn't everybody doing so? Building a proper database takes years or decades: you act now for something that will bear fruit in 10-15 years. For that you need:

– expertise: technology is changing so fast that you need expertise to see far enough ahead to build a vision. This expertise should cover at least the following fields:

technological (IT)

legal (GDPR)

scientific (how to overcome methodological issues)

– vision: many people will react angrily, saying that what you are talking about is not possible, and you need a vision to convince them

– knowledge: to understand the data-based society that we are building every day

– resources: you have to ask for a large amount of money to build a system that is controversial today but may bear fruit in 10-15 years.

We define as data all kinds of information that the project owner provides, or that is electronically available in connection with the project, and that is automatically collectable and processable during the evaluation process.

Whether databases from abroad can be used is questionable, or at least untested. The cultures of winning and using funds differ so much between countries that a similarly built dataset may lead to invalid data. Every country should build up its own database.

We should think about three things:

• Do we have methods to collect a certain piece of data?

• Is this method reliable in terms of being repeatable?

• Do we understand the data and its contribution to success?

Data that are collected today

A successful R&D project must succeed in three fields (Ries, 2011):

• technology

• market reach

• management

Funding agencies collect data on the project, on the company and on the persons in the company. On the project we have data on scientific excellence, which includes all evaluations made by experts and thus, to this extent, business excellence as well.

On the persons we have CVs (Gaughan and Bozeman, 2002) and the companies and projects they were involved in. From the company register we can collect all the companies in which a certain person was an owner or general manager, although it is really hard to estimate that person's role in either success or failure. We can also ask about the projects the person was involved in, but such self-declared information also has a distorting effect. Some agencies (e.g. the Irish Science Fund) ask for referees and actually call them to get information, but this is more an exception than a trend.

On companies we have all the data that are in the company registry. A fourth category is the dataset of previously funded projects. We can compare new projects with them and draw statistical conclusions from them, and these conclusions can appear in the scoring system. So if we find that having a team member with a PhD degree increases the chances of overall success by a certain percentage, we can assign corresponding scores to future projects.
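A minimal sketch of how such a historical pattern could be turned into score points; the dataset, column names and weighting below are purely illustrative assumptions, not part of any agency's actual methodology:

```python
import pandas as pd

# Hypothetical dataset of previously funded projects; column names are illustrative.
past = pd.DataFrame({
    "has_phd_member": [1, 1, 0, 0, 1, 0, 1, 0],
    "successful":     [1, 1, 0, 1, 1, 0, 0, 0],
})

# Success rate with and without the feature.
rate_with = past.loc[past.has_phd_member == 1, "successful"].mean()
rate_without = past.loc[past.has_phd_member == 0, "successful"].mean()
lift = rate_with - rate_without  # e.g. +0.50 means 50 percentage points higher

# One illustrative way to convert the observed lift into points on a scoring scale.
MAX_FEATURE_POINTS = 10
phd_points = round(max(lift, 0) * MAX_FEATURE_POINTS)

def score_new_project(has_phd_member: bool) -> int:
    """Return the score contribution of this single feature for a new proposal."""
    return phd_points if has_phd_member else 0

print(f"lift={lift:.2f}, points awarded for a PhD team member: {score_new_project(True)}")
```

In a real scoring system many such features would be weighted together and validated on held-out projects; the sketch only shows the basic conversion from historical evidence to points.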

These are the datasets that should be complemented with big data. In the next sections we present three existing methods that are based on big data and can contribute to decision-making on funding corporate innovative projects.

Technology rating systems and previous projects

Is it possible to find indicators that are signs of success, or that contribute to it by a definable percentage? From a funding agency's viewpoint, can the scoring system of a call be evidence-based?

Sohn and Kim (2012) provide a good overview of the attempts to find the key success factors for an innovation to reach the market and survive afterwards, and they created a technology rating system from their study.

Technology rating systems are used for technology credit guarantees in Korea. They are composed of the following indicators: the ability of the CEO, the level of technology, the marketability of the technology, and potential or realistic profitability (Sohn and Kim, 2012, p. 4008). Some of the attributes are measured on a Likert scale by a committee of evaluators; in these cases the methodology must share the problems of our peer review system. However, finding objective indicators that proved to be good forecasts of success in the past can be an alternative or a complement to peer review. These objective indicators can be evaluated with logistic regression analysis (Chen and Huang, 2003).

Based on the data, the projects are classified into five risk categories. In every category they can forecast the percentage of failed projects. So they fund even the riskiest projects, but they can estimate how much of the credit they guarantee will not be repaid, and are thus able to control their budget. In the long run, the projects selected with the data-based decision-making system have always stayed within the predefined success rate.
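A minimal sketch, with made-up indicator values and illustrative cut-offs, of how objective indicators could feed a logistic regression whose predicted failure probabilities are binned into five risk categories; this is our illustration of the approach, not the Korean system's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are past projects, columns are objective
# indicators (e.g. CEO experience in years, technology readiness level,
# estimated market size in EUR million); y is 1 for failed, 0 for successful.
X_train = np.array([
    [12, 7, 40], [3, 4, 5], [8, 6, 25], [1, 3, 2],
    [15, 8, 60], [2, 5, 8], [10, 6, 30], [4, 3, 3],
])
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def risk_category(indicators) -> int:
    """Map the predicted failure probability to one of five risk categories (1 = lowest risk)."""
    p_fail = model.predict_proba([indicators])[0, 1]
    bins = [0.2, 0.4, 0.6, 0.8]  # illustrative cut-offs between the five categories
    return int(np.digitize(p_fail, bins)) + 1

# A new proposal described by its (hypothetical) indicator values.
print(risk_category([9, 6, 20]))
```

The practical value of such a model lies less in any single prediction than in the budget-level property described above: within each category the expected share of failures is known in advance.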

The Republic of Korea has been using this system for more than 15 years. The first attempt to copy it is a Horizon 2020 project aiming to create a technology rating system on similar principles to the Korean one: the InnoRate Technology Rating Platform, a data-driven tool for supporting and improving the decision-making processes of investors financing innovative SMEs. The project began in 2019 and lasts until 2021. It intends to adapt the Korean methodology to the EU reality, and it targets investors and lenders, not yet the public sector.

Leveraging the wealth of existing data, semantic technology and in-depth human expertise, the InnoRate Technology Rating Platform aims to:

• Minimize the time and resources (human and financial) required by investors and lenders for assessing innovative SME cases;

• Make the prospects of innovative SMEs clear to investors;

• Reduce knowledge and information asymmetries and risk premiums paid by innovative project managers.

This is the first case in which the decision is based on data and software. We can see that it makes the funding system calculable and reliable.

Pull innovation and market need

Tidd et al. (1997) introduced the notions of push and pull innovation. Push innovation is when an existing technology is transformed into a product; pull innovation is when the proper technology is created as a reaction to a market need.

“The fast growth of a company is related to successful entrepreneurial culture” (Davidson and Henrekson, 2002). That is why the startup industry has worked out standardized methods for building up companies (e.g. Blank and Dorf, 2012). Can these be translated into datasets?

Hartigh (2018) conceptualized the innovation system of a company as a system instead of a process: “Components of a company innovation system are actors or resources.” “We can identify the components, such as R&D departments, labs, venture organizations, teams, employees, C-level offices and facilitating tools.” What is important for us is that a system, its elements, and the connections between these elements can be counted and measured digitally.

There is no single way to the market; rather, there is a labyrinth with many entrances and exits, and most of the exits do not lead to the market. There are tools, and management should build its own road to the market by combining them. So if we are able to collect data while the beneficiary uses these tools, we can follow the beneficiary on the way to the market. If the beneficiary is not using any of the tools, that raises questions about how up to date its management knowledge is.

Guan et al. (2016, p. 771) distinguish two types of indicators that measure R&D: result-oriented performance and process-oriented (efficiency) performance. Karlsson and Andersson (2009) distinguish three types of indicators: investment, performance and results. A project has three important features: goal, timing and resources. These are all measurable and comparable indicators.

The main risk of innovation is not technical but entrepreneurial. Good professionals can predict whether a project is technologically feasible; marketization is a much bigger challenge. Is it possible to do market research before an R&D project? From a funding agency's viewpoint, can we reduce the commercialization risk before the funding decision? Is it possible to collect big data while doing so? Can we avoid funding R&D projects that for various reasons have no way to the market?

There is a lack of literature examining what impact professional project management has on the success of an RDI project. In other words, we do not know how many projects fail because of inadequate goals and how many fail because of poor project management.

Commercialization is an actor's action to transfer a knowledge asset to another, independent actor and gain monetary resources (Hemert et al., 2012; Lichtenthaler, 2005). Danneels (2002) concludes that successful marketization depends on a firm's ability to delink competencies from existing product-market combinations and relink them to new niches. Schenk and Guittard (2011) analyze user and market feedback as a source of innovation.

Lean management (Ries, 2011) says that everything a company knows about the market is just a hypothesis and should be thoroughly tested. A project is full of goals that rest on such hypotheses. Who is the potential customer, why are they a customer, what product features do they like, how can the company reach them, what does it cost: these are all questions stated in the project description. The evaluator examines them thoroughly, but this is still a debate about thoughts, not facts. They remain hypotheses until they are tested. Hypothesis testing is data collection: the project owner should be able to reach out to the potential market and test what they believe in.

Testing has two phases (Blank and Dorf, 2012). First, we should find out whether the problem we want to solve exists on the market (Ellis and Brown, 2017): is there a group of people who feel that the problem exists, are bothered by it, see it as clearly their problem, and are willing to pay to get rid of it? Second, is the answer we give an appropriate one?

Launchpad Central is the software version of Blank's method. It has a panel for supporting governmental agencies. The NIH and the I-Corps program are partners of this software, which shows that there is a governmental need to support market hypothesis testing and to collect more data on that process. The agencies today use it for educational purposes, but in the longer run a similar app might become part of the data collection, decision-making or monitoring process.

A tool of marketization is user experience (UX) (Tullis and Albert, 2013). UX has three main characteristics: (1) a user is involved; (2) the user is interacting with a product, a machine, a system, or anything with an interface; (3) the experience of that user is of interest. The commercialization process includes all three. It is hard to find a market for a product that is too complicated for the average user. The UX mentality is not restricted to software; we can apply it to all products. UX is measurable: you can measure performance, usability and self-reported experience, and you can use behavioral or physiological metrics. These data are also collectible.
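As a small illustration of such a collectible, self-reported metric, the widely used System Usability Scale (SUS) turns a ten-item questionnaire into a single 0-100 score; a minimal sketch with hypothetical answers:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten answers
    given on a 1-5 Likert scale, following the standard SUS formula."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical answers from one test user of a funded product's prototype.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```

Scores like this, collected repeatedly during a project, are exactly the kind of structured, comparable evidence a funding agency could store alongside administrative data.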

How do these tools create a database that predicts success on the market? We do not know yet, but it is time to begin collecting data on funded projects and, after a while, to find patterns in them. The goal can be the continuous digital supervision, or listening, of innovation activities.

A digitally listened company in this regard means that all the necessary information is shared with the funding agency, and the company's relevant activities are digitally monitored by apps. Is there any chance that companies will be partners in such a funding system? It is easy to collect arguments for why it will never happen; confidentiality and corporate culture are the two main ones.

But there can be segments of companies where this culture might spread. One group that might gain from such a system is companies that continuously use public money for their development: if they want to build the use of public money into their business model, the funding agencies are in a position to ask for more openness. The second group is the newcomers, the startups, which have only one basic activity, so there is no danger of having to share too much information about other parts of the company. The typical company that will be hardest to convince is an established one with only occasional innovative projects.

It is important that a digitally listened company would share data on its commercialization activity and not on technology development. Confidentiality is a basic operating principle of research funding institutions; this would just be an extension of the datasets, nothing fundamentally new. Today all the confidential information is shared with peers, who are people. Of course there must not be any conflict of interest, but still, nobody can guarantee that a peer will not learn and later use some information in a way that cannot be traced back to a certain project. At a digitally listened company nobody would read the raw information themselves, only the compared, aggregated information. So although digitally listened companies would share much more information, their data might sit in a more confidential environment than today.

Similar projects and text mining based on semantic similarity

The European Commission highlights (EC, 2017, p. 21) that many companies run similar research projects. R&D investments are thus duplicated, although only one of the companies will reap the benefits. This deters companies from investing in R&D.

Who has the time to stay up to date with hundreds of university projects and thousands of companies? It is impossible. The datasets of funded projects are not merged. But we can automate the comparison of projects by assessing their semantic similarity.

The current routine does not even include filtering out identical applications. An article in Nature stresses that “checking grants at other agencies is something that doesn't exist” (Reich and Myhrvold, 2013). “In general, agencies do not cross-check federal grants against their own new awards” (Reich, 2012). “Cases tend to come to light only if peer reviewers spot similarities in grant applications” (Reich, 2012). This is about the NIH in the USA, but it is applicable to the EU as well.

There are three cases:

• Same researcher, same research – fraud in most cases

• Same researcher, slightly different research – waste of money

• Different researchers, maybe in different countries, same topic – a big opportunity to save public money

Ongoing projects have no publications or patents yet, and there is no common database on them either. So we can be sure that a large number of research and innovation projects duplicate something that already exists; we simply do not have the tools to discover it.

Recognizing this problem, a tool is under development at the Joint Research Centre (JRC), the science service of the EU. It estimates the semantic similarity of two texts and thus helps applicants and evaluators become aware of similar projects. It is not searching for plagiarism, so the goal is not to find the same words in the same order; it aims to reveal similarity of content. It looks in three databases: Horizon 2020 projects (CORDA), publications (SCOPUS) and patents (European Patent Office).

This tool helps to reveal similar projects. The existence of similar projects is not necessarily a bad sign that must deter us from funding a project. This is typically a system that supports, rather than replaces, peer review.
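The internals of the JRC tool are not public in detail; purely as an illustration of the idea, the sketch below scores pairs of proposal abstracts with TF-IDF vectors and cosine similarity. The texts, threshold and library choice are our assumptions, and a production system would rather use semantic embeddings than bag-of-words vectors:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical proposal abstracts; in practice these would come from the
# CORDA, SCOPUS and patent databases mentioned above.
documents = [
    "Development of a low-cost sensor network for precision irrigation in vineyards.",
    "A low-cost wireless sensor network to optimise irrigation water use in vineyards.",
    "Blockchain-based invoicing system for small retail businesses.",
]

# TF-IDF is only a simple baseline for content similarity, but the ranking idea is the same.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
similarity = cosine_similarity(vectors)

# Flag pairs of proposals whose similarity exceeds an (illustrative) threshold.
THRESHOLD = 0.3
for i in range(len(documents)):
    for j in range(i + 1, len(documents)):
        if similarity[i, j] > THRESHOLD:
            print(f"Proposals {i} and {j} look similar (score {similarity[i, j]:.2f})")
```

Such a flag would not reject a proposal automatically; it would simply hand the peer reviewers a list of candidate overlaps to inspect.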

A model of the data-based decision-making

In the previous pages we presented the debates on reforming the peer review system and showed that big data is a key technology supporting this reform. It has already appeared in policy documents; however, so far there is no theoretical framework that fits the technological puzzle pieces into one process.

We examined three technologies which can be part of a future peer-review-backing system:

– technology rating systems

– digital listening of companies' marketization activities

– text mining

The process begins with testing the market. This can be an ongoing activity until the data shows the way to the market. Then a project can be submitted in a form in which all its data can be understood digitally and compared with other projects. Then comes the peer review, backed by an automatic analysis of semantic similarity with other projects and publications. Based on these data the peers can decide.

Then as much data as possible should be published openly on the good projects, and the companies should be urged to publish their successes regularly and openly. Thus they can find investors and cooperation partners, who will rely on the decision made at the funding agency, since it uses the biggest database available. Finally, it is important to continue gathering data for a long time, years after the project is finished, and to check whether the decision led to real economic success. Most funding agencies already collect data on projects for some years, so there is nothing new in this.

As can be seen, all the elements of the process already exist in one way or another. What does not exist is a framework software that brings all the datasets together and extends the data collection to everything that can be collected automatically.

Finally, although it is beyond our scope, we have to mention that companies must be brought to accept such a system. Data-sharing should be part of the culture of the innovation system. Just as the use of personal data became accepted in the advertising industry, we assume that this model can gain acceptance as well.

Conclusions

Spending more public money on corporate RDI projects does not necessarily lead to more profit from these projects. Our decision-making process, peer review, has received many critiques so far. Its renewal is possible with new technologies connected to digitalization; some are more mature, some are still in the test tube, but they are waiting to be used.

We proposed three technologies: digital listening of marketization, searching for semantic similarity, and technology rating based on previous projects. We outlined a decision-making process that backs peer review with these technologies.

We argued that using big data in decision-making can be the future. The earlier we begin to experiment with it, the earlier it will seep into corporate culture, which is today the main obstacle to moving towards these technologies.

References

Arnold, E. (2004). Evaluating research and innovation policy: a systems world needs systems evaluations. Research Evaluation (13) (1) pp. 3–17, https://doi.org/10.3152/147154404781776509

Blank, S. – Dorf, B. (2012). The Startup Owner's Manual, K&S Ranch, USA.

Chen, M. C. – Huang S. H. (2003). Credit scoring and rejected instances reassigning through evolutionary computation techniques. Expert Systems with Applications 2003 (24). pp. 433-441.

Danneels, E. (2002). The dynamics of product innovation and firm competencies. Strategic Management Journal 23 (12) pp. 1095-1121.

Davidson, P. – Henrekson, M. (2002). Determinants of the Prevalence of Start-Ups and High Growth Firms. Small Business Economics 19 (2). pp. 81-104.

De Fraja, G. (2016). Optimal public funding for research: a theoretical analysis. The RAND Journal of Economics (47) pp. 498–528. doi:10.1111/1756-2171.12135

EC (2017): The Economic Rationale for Public R&I Funding and its Impact; European Commission DG Research and Innovation. Eu Publication. https://op.europa.eu/en/publication-detail/-/publication/0635b07f-07bb-11e7-8a35-01aa75ed71a1/language-en/format-PDF (retrieved on 10.11.2020)

EC (2017/2): FAB-LAB-APP, European Commission, DG RTD; ISBN: 978-92-79-65270-7

Ellis, S and Brown, P. M. (2017). Hacking Growth; Crown Business, USA.

Guan, J. C., Zuo, K. R., Chen, K. H. and Yam, R. C. M. (2016). Does country-level R&D efficiency benefit from the collaboration network structure? Research Policy 45 (4) p. 770-784.

Hartigh, Erik. (2018). Company Innovation System: A Conceptualization; International Association for Management of Technology (IAMOT) 2018 Conference Proceedings Birmingham, April, 2018. https://www.researchgate.net/publication/336878380_COMPANY_INNOVATION_SYSTEM_A_CONCEPTUALIZATION (retrieved: 11.11.2020)

Hather, G. J., Haynes, W., Higdon, R., Kolker, N., Stewart, E. A., Arzberger. P., et al. (2010). The United States of America and Scientific Research. PLoS ONE 5 (8) e12203 https://doi.org/10.1371/journal.pone.0012203 (retrieved: 16.11.2020)

Hemert, P., Nijkamp, P. and Masurel, E. (2012). From innovation to commercialization through networks and agglomerations: analysis of sources of innovation, innovation capabilities and performance of Dutch SMEs. The Annals of Regional Science 2012 (50), pp. 425–452.

Jaruzelski, B., Dehoff, K. and Bordia, R. (2005). Money Isn’t Everything in: Strategy+Business, 2005 (41) https://www.strategy-business.com/article/05406?gko=ce6a6 (retrieved: 21.11.2020)

Johnson, V. E. (2008). Statistical analysis of the National Institutes of Health peer review system. Proceedings of the National Academy of Sciences of the USA 105: 11076–11080.

Jones, C. I. (1995): R&D-Based Models of Economic Growth. Journal of Political Economy 103 (4), pp. 759-784.

Karlsson, C. – Andersson, M. (2009). The Location of Industry R&D and the Location of University R&D: How Are They Related?. in: Karlsson et.al.: New Directions in Regional Economic Development; Springer, Berlin, Heidelberg; p. 267-290; ISBN: 978-3-642-01016-3

Lepori, B.,Benninghoff, M., Jongbloed, B., Salerno, C. and Slipersaeter, S. (2007). Changing models and patterns of higher education funding: Some empirical evidence. In A. Bonaccorsi & C. Daraio (Eds.) Universities and Strategic Knowledge Creation. Specialisation and Performance in Europe (pp. 85-111). Bodmin, Cornwall: MPG Books Limited.

Lichtenthaler, U. (2005). External commercialization of knowledge: review and research agenda; International Journal of Management Reviews 7:231-255.

OECD (2015). Daejeon Declaration on Science, Technology, and Innovation Policies for the Global and Digital Age. OECD Publishing, Paris.

OECD (2016). STI Outlook, Policy Profile. OECD Publishing, Paris.

Reale, E. (2017). Analysis of national Public Research Funding, Publications Office of the European Union, Luxembourg doi:10.2760/19140

Reich, E. S. (2012). Duplicate-grant case puts funders under pressure. Nature 482 (7384), p. 146.

Reich, E. S. and Myhrvold, C. L. (2013). Funding agencies urged to check for duplicate grants. Nature 493 (7437), pp. 588-589.

Ries, E. (2011). The lean startup, Crown Business, New York.

Romano, L., Trau, F. (2017). The nature of industrial development and the speed of structural change. Structural Change and Economic Dynamics, 42 (C), pp.  26-37.

Romer, P. M. (1990). Human Capital and Growth: Theory and Evidence; Carnegie-Rochester Conference Series on Public Policy: Unit Roots, Investment Measures and Other Essays, Vol. 32, pp: 251-286, Spring 1990

Schenk, E. – Guittard, C. (2011). Towards a characterization of crowdsourcing practices. Journal of Innovation Economics 1 (7), pp. 93-107.

Shibata, N., Kajikawa, Y., Takeda, Y. and Matsushima, K. (2008). Detecting Emerging Research Fronts Based on Topological Measures in Citation Networks of Scientific Publications. Technovation 28 (2008) p:758-775.

Sitenko, D. and Vasa, L. (2018). The projects of the Industrialization Map as the main tool of implementation of the program of industrial and innovative development of Kazakhstan. International Journal of Economics and Project Management 1 (2) pp. 43-51.

Sohn, S. Y. – Kim, J. W. (2012). Decision tree-based technology credit scoring for start-up firms: Korean case. Expert Systems with Applications 39 (4), pp. 4007-4012.

Solow, R. (1957). Technical Change and Aggregate Production Function. Review of Economics and Statistics 39 (3). pp. 312-320.

Tóth-Haász, G., Baracskai, Z. and Dőry, T. (2019). Understanding aspirations: R&D project evaluation by knowledge-based systems. In: Ibrahimov, Muslim; Aleksic, Ana; Dukic, Darko (eds.) Economic and Social Development: Book of Abstracts. pp. 140-141.

Van der Meulen, B. (2010). Evaluating the societal relevance of academic research: A guide. ERiC publication 1001 EN. Delft, The Netherlands, Delft university of Technology https://repository.tudelft.nl/islandora/object/uuid:8fa07276-cf52-41f3-aa70-a71678234424?collection=research (retrieved on 30.10.2020)

Vanino, E., Roper, S. and Becker, B. (2019). Knowledge to money: Assessing the business performance effects of publicly-funded R&D grants. Research Policy 48 (7). pp. 1714-1734.

Vasa, L., Darabos, V. and Kelemen-Hényel, N. (2012). System and effects of financial incentives for innovation – a case study from Hungary. Vestnik Keu: Economics, Philosophy, Pedagogies, Jurisprudence 25 (3) pp. 13-19.

Vasa, L. and Sitenko, D. (2011). R&D activity of the Hungarian SME sector and the European paradox of innovation. Saiasat-Policy: Informacionno-Analiticheskij Zhurnal 2011 (3) pp. 27-32.

Vasa, L. (2010). Egy lehetséges kitörési pont: innovatív vállalkozói környezet [A possible breakout point: the innovative entrepreneurial environment, in Hungarian]. Harvard Business Review (Hungarian Edition) 2010 (5) pp. 34-41.

Vasvári, B., Mayer, G. and Vasa, L. (2020). A tudományos és innovációs parkok szerepe a tudásgazdaság és az innovációs ökoszisztéma fejlesztésében [The role of science and innovation parks in the development of innovation ecosystems, in Hungarian]. Tér-Gazdaság-Ember 2020 (2) pp. 95-109.

Wang, Y. – Li, J. – Furman, J.L. (2017). Firm performance and state innovation funding: evidence from China’s Innofund program. Research Policy 46 (2017), pp. 1142-1161

SPALLER, Endre
Széchenyi István University, Hungary

DŐRY, Tibor
Széchenyi István University, Hungary