Key figures are a perennial topic in idea management: they have been used for several decades and are once again widely discussed. The reasons are obvious. You want to set performance benchmarks, define goals, investigate whenever things are not going well, and measure the success or failure of idea management.
While key figure systems with more than 30 different key figures were once not unusual, nobody goes down this mistaken path today. Less is clearly better. In allusion to the “big five” of the South African safari (elephant, rhino, buffalo, lion, and leopard), idea management found its own “big five” long ago:
Several parties offer benchmarks in idea management. When companies make their key figures available, they can usually expect an evaluation that shows their position compared to the other benchmarking participants. Of course, this can only work if all participating companies interpret the requested values consistently.
In 2012, the Zentrum Ideenmanagement (Idea Management Center) in the German Institute of Ideas and Innovation Management (hereinafter referred to as ZI) made a valuable contribution towards establishing standard definitions. In expert circles in which the target Software Solution (target) played a decisive role, it proved possible to condense the key figure definitions for idea management/employee suggestion programs onto two A4 pages, and those for CIP onto one page. One highlight was the certification of a small number of software manufacturers: in its capacity as an independent testing facility, TÜV Rheinland created a test catalog and carried out the corresponding tests at the software manufacturers. target was awarded the certificate in January 2012 following successful completion of the test.
In the following, we use the ZI definitions from 2012, unless expressly stated otherwise.
The quotas that matter to us here have not been explicitly defined. A quota arises when a key figure is set in relation to the number of employees, ideas, or the like. From the absolute number of ideas a company receives, it is impossible to tell whether this is relatively many or few; you can only judge this if you know how many employees could submit ideas. Our observation period is always one calendar year.
Idea quota (IQ)
This quota stands for “mass”: the number of ideas submitted in one year in relation to the number of employees who could participate (potential participants). When we talk about employees in the following, we always mean this potential number of participants. An IQ of 1.0 therefore means that, on average, one idea is submitted per employee per year. The unit is ideas per employee; to keep things simple, however, we often omit it.
Many companies have not yet managed to exceed an IQ of 0.5. Values well above 1 are achieved in companies with a high number of submissions.
If idea quotas greater than 10 are reported, CIP actions are generally included in the count. Otherwise, one has to assume that quantity is being pursued at the expense of quality.
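As a minimal sketch (the figures are hypothetical), the idea quota is simply the ratio of submitted ideas to potential participants:

```python
def idea_quota(ideas_submitted: int, potential_participants: int) -> float:
    """Ideas submitted in one calendar year per potential participant."""
    return ideas_submitted / potential_participants

# 800 ideas from a workforce of 1,600 eligible employees:
print(idea_quota(800, 1600))  # -> 0.5
```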
Participation quota (PQ)
This quota represents the proportion of employees who participate in one year (i.e., who submit at least one new idea) and thus reflects the level of awareness or attractiveness of idea management.
Average values are often in the range of 30%. Only a handful of companies achieve values above 50%.
The idea and participation quota are leadership and cultural issues. They indicate how well idea management is accepted and the level of motivation of employees.
Incidentally, the key figure does not provide any information on the number of employees reached over a longer period of time: year after year, it may essentially be the same employees who participate in idea management without any new ones getting involved. Nor is there a strict dependency on the idea quota. A high participation quota always implies an idea quota at least as high, but the reverse need not hold: if a company has only a few submitters, each of whom generates many ideas (“repeat contributors”), the idea quota may be relatively high while the participation quota remains very low.
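The “repeat contributor” effect can be illustrated with a small Python sketch (names and numbers are invented): three submitters generate 100 ideas, giving a high idea quota but a very low participation quota.

```python
# Hypothetical submission log: one entry per idea, value = submitter ID.
submissions = ["anna"] * 40 + ["ben"] * 35 + ["carla"] * 25  # 100 ideas, 3 submitters

employees = 100  # potential participants

iq = len(submissions) / employees       # idea quota: ideas per employee
pq = len(set(submissions)) / employees  # participation quota: distinct submitters

print(f"IQ = {iq:.2f}, PQ = {pq:.0%}")  # IQ = 1.00, PQ = 3%
```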
Realization quota (RQ)
This quota represents the proportion of closed ideas that were realized, in other words, how many of the ideas were found to be “good” and implemented. It therefore says something about the quality of the ideas.
In practice, values often fluctuate around 50%.
Benefit quota (BQ)
There are different variations of this quota. We prefer the following: The benefit quota represents the calculable net benefit of the implemented ideas per employee. It is also conceivable to replace net benefit with gross benefit, which would make the quota appear larger. In the case of an average implementation cost of 10% of the gross benefit, the “net benefit quota” is 10% lower than the “gross benefit quota”. At least in terms of scale, the two variants will deliver similar values in practice.
The realization and benefit quota define the effectiveness of idea management. The benefit quota provides information on the end result in financial terms. This makes the key figure one of the most effective weapons in discussions with “non-experts”. Money is a universal language. Leadership, culture, psychology – that’s where things start to get difficult.
Here, too, there is a high degree of fluctuation in practice. Values peak at over €1,000 per employee per year.
Instead of using the reference "per employee", we can also use "per closed idea" or "per implemented idea". However, with the definition "per employee", the total benefit of idea management can be extrapolated more intuitively because the number of employees may be assumed to be known, but not the number of closed or implemented suggestions.
Do the variants "per employee" and "per closed/implemented idea" also deliver values of a similar magnitude? Not at all! Take, for instance, a company with 1,000 employees and 500 closed ideas per year, 250 of which were implemented. The total benefit from the implemented ideas amounts to €1 million.
This results in a benefit of €1,000 per employee, €2,000 per closed idea, and €4,000 per implemented idea.
In other words, the values of the benefit quota vary considerably depending on the definition variant. It is therefore essential to ask which definition was used when companies wish to compare themselves.
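The arithmetic of the example above can be checked in a few lines of Python:

```python
employees, closed_ideas, implemented = 1000, 500, 250
total_benefit = 1_000_000  # EUR, calculable benefit of the implemented ideas

per_employee = total_benefit / employees        # benefit quota "per employee"
per_closed = total_benefit / closed_ideas       # variant "per closed idea"
per_implemented = total_benefit / implemented   # variant "per implemented idea"

print(per_employee, per_closed, per_implemented)  # 1000.0 2000.0 4000.0
```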
A partial dependency exists between the realization quota and the benefit quota. A high realization quota will usually also result in a good benefit quota. However, the reverse relationship does not always apply. In organizations in which implemented proposals can have very high benefits (for example, military, armaments industry), a small number of proposals are sufficient for an excellent benefit quota.
Processing time quota (TQ)
There are a wide range of definition variants for processing time quotas, which may at first appear surprising.
The simplest variant defines the average processing time in the most obvious way as the sum of the processing times per idea (days from submission to closing) divided by the number of closed ideas. Everyone understands this immediately. However, it does not match our intuitive feel for processing times. This is explained in the following example:
In a company with excellent idea management, things suddenly stop running smoothly. Idea management becomes a low(est) priority. Managers and experts no longer feel as motivated as before, or they are no longer given the time to process ideas. The only activities still running are those that require no effort, e.g. closing ideas that obviously cannot be implemented; that takes no time at all. The average processing time, considering only closed ideas, does not look bad at all. However, the mountain of open ideas has barely been touched: it feels as though little has been processed, or that the processing quality is poor.
Do you need more convincing, because this is far too theoretical and not practical enough? In that case, let’s take another example.
A company strongly promotes idea management. Managers and reviewers are highly motivated and finally tackle the many “long-running ideas” which have either not been decided upon or not implemented since forever. All of a sudden, things start to improve, and submitters are saying: finally something is happening, our ideas are being implemented or rejected, and we now know where we stand. The average processing time – considering only closed ideas – is abysmal, and much worse than the year before. It is true that there are many closed ideas. If you increase the denominator, you reduce the key figure, so far so good. The problem is the numerator: many individual ideas that are included in the calculation with a very long runtime.
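A small Python sketch (with invented durations) shows the effect: clearing the backlog of long-runners makes the closed-only average look dramatically worse, even though processing has actually improved.

```python
def avg_processing_time(closed_durations):
    """Average days from submission to closing, closed ideas only."""
    return sum(closed_durations) / len(closed_durations)

# Year 1: only fresh, easy ideas get closed; the backlog is ignored.
year1 = [20, 30, 25, 35]                 # days per closed idea
# Year 2: the backlog of long-running ideas is finally cleared as well.
year2 = [20, 30, 25, 35, 700, 800, 900]  # days per closed idea

print(avg_processing_time(year1))  # 27.5 -> looks great
print(avg_processing_time(year2))  # ~358.6 -> looks terrible, yet processing improved
```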
The argument “but in quiet times it is a reasonable definition” does not help. A key figure that shows no changes can be dispensed with; a key figure that even portrays a change of direction in a misleading way is harmful.
When assessing the processing quality, the size of the mountain of pending ideas must also be taken into account. A "provision" has to be created for this, as it were, because it is included as an obligation in the new reporting year.
The definition of the processing time key figure should therefore be changed, or at least supplemented by a further key figure and considered in an extended context, in order to take the still pending, unfinished ideas into account.
The ZI made such an attempt in 2012 with the following definition:
“Average of the calendar days of all proposals submitted in the current reporting year and all open proposals from previous reporting years, which are needed from the date of submission to the closing date. The closing date must be in the current reporting year. Closing means that the proposals are realized or rejected in the reporting year. Proposals still open on 31.12. of the reporting year are included in the calculation together with the calendar days from submission up to 31.12.” (End of quotation)
The last sentence of the definition seems incomprehensible at first. It expresses the attempt to include the still open, unfinished ideas in the processing time. If there are an above-average number of “long-running ideas”, this definition variant shows correspondingly longer processing times. The “disadvantage” of this variant, however, is that the average processing time appears reduced even when processing conditions remain more or less constant.
In practice, no one has paid any attention to the ZI’s definition from 2012. It was no longer used by the ZI itself in 2016. Instead, the ZI supplemented the above key figure “Average processing time of closed proposals” with the additional key figure “Average processing time of proposals pending at the end of the year”. This additional key figure records the average “age” of proposals still pending. The key figure thus provides an important indication of how well ideas are processed. If the mountain of unfinished ideas gets bigger and bigger, the key figure rises, and if the “legacy burden” is dealt with, the key figure falls.
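The two key figures used by the ZI since 2016 can be sketched as follows, assuming a minimal hypothetical data set of submission and closing dates:

```python
from datetime import date

YEAR_END = date(2016, 12, 31)

# (submission date, closing date or None if still open) -- invented data
proposals = [
    (date(2016, 2, 1), date(2016, 5, 1)),
    (date(2015, 11, 15), date(2016, 3, 1)),
    (date(2016, 6, 1), None),   # still pending at year end
    (date(2014, 1, 10), None),  # long-running, still pending
]

closed = [(c - s).days for s, c in proposals if c is not None]
pending = [(YEAR_END - s).days for s, c in proposals if c is None]

# Average processing time of closed proposals:
print(sum(closed) / len(closed))    # 98.5 days
# Average "age" of proposals pending at the end of the year:
print(sum(pending) / len(pending))  # 649.5 days
```

If the mountain of unfinished ideas grows, the second value rises; if the legacy burden is cleared, it falls.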
Another variant attempts to measure the processing time until a decision is made, in other words, to keep the time for implementation out of the calculation.
Relationships between the quotas
Interesting analyses compare the quotas in two dimensions, with IQ or PQ forming the y-axis and RQ or BQ the x-axis. Effectiveness (e.g. the benefit quota) is thus measured on the horizontal axis, and commitment (e.g. the idea quota) on the vertical axis.
Once all the points of the companies participating in the benchmark have been recorded, the graph is divided into four quadrants of equal size. The individual quadrants can be characterized as follows:
Top right, IQ and BQ high: quantity and quality; these are the champions.
Bottom right, IQ low, BQ high: quality with little quantity; a maximum result is achieved with minimum effort. Can be quite smart, but also risky.
Top left, IQ high, BQ low: quantity with little quality; these are the climbers, companies with potential. The most important ingredient, motivation, is already in place; there is still room for more efficiency.
Bottom left, IQ low, BQ low: the losers.
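As an illustration (company names and values are invented), such a quadrant classification might be sketched as follows, splitting each axis at its midpoint:

```python
# Hypothetical benchmark data: company -> (idea quota, benefit quota in EUR)
companies = {
    "A": (1.2, 900.0),
    "B": (0.3, 1100.0),
    "C": (1.5, 200.0),
    "D": (0.2, 150.0),
}

# Divide the plot into four quadrants of equal size at the axis midpoints.
iqs = [iq for iq, _ in companies.values()]
bqs = [bq for _, bq in companies.values()]
iq_mid = (min(iqs) + max(iqs)) / 2
bq_mid = (min(bqs) + max(bqs)) / 2

labels = {
    (True, True): "champion",          # top right
    (False, True): "smart but risky",  # bottom right
    (True, False): "climber",          # top left
    (False, False): "loser",           # bottom left
}

for name, (iq, bq) in companies.items():
    print(name, labels[(iq >= iq_mid, bq >= bq_mid)])
```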
The open and honest analysis of your own current situation enables you to devise a strategy for how it can be improved.
Which key figure is the most important?
This is a somewhat naive question to which there is no simple answer. But there are some pointers to help you.
With a low idea quota, the recipe is simple: consider all possible ways and measures to increase it.
If the benefits are high, it is quite possible that companies are prepared to reduce the number of measures to increase the idea quota in order to minimize the process effort. This step requires a great deal of experience and stable, good results over several years. It is not without risk.
Summary so far
When defining the idea quota, participation quota and realization quota, there is little you can do wrong. It is advisable to define the benefit quota and processing time quota carefully, as different variants are used which can vary considerably. Those who compare quotas without first asking for the definition have already lost control of their idea management.
The five quotas presented in the correct definition variant are sharp swords for as-is analysis and target definition. Thus the "Big five" become the "Beautiful five".
Caution: there are plenty of pitfalls. You should be aware of them to avoid falling into them yourself. “Do not trust statistics...”.
The statements above can all be applied to internal benchmarking as well. This means that the individual departments, main departments, etc. are assigned their respective key figures, thus enabling a comparison to be made on the different hierarchy levels.
A prerequisite for meaningful handling is, of course, that idea management can always access the latest organization chart. target customers can sit back and relax. Since the software can access SAP data directly (employees and organizational data), no interfaces need to be provided and supplied.
In contrast to external benchmarking, however, there are still some special features to consider in internal cases.
The first question to be asked is how to assign an idea that has been submitted by several employees. Can it only be assigned according to the first submitter? Or are the departments of the respective submitters considered according to the degree of participation? There is no right or wrong answer here. Once a definition has been made, it must be applied consistently to the remaining key figures.
Between the submission date and the closing date of an idea, organizational changes may have taken place which affect the submitters. For example, an idea is submitted by an employee from department A. When the idea is closed, the employee belongs to department B. To whom is the benefit "credited"? Here, too, different variants can be depicted. We prefer the assignment to the department on the closing date, in other words, department B. If major organizational changes have taken place, department A may have ceased to exist. It makes no sense at all if "corpses" still generate benefits.
It is always necessary to ask how access to statistical information is to be regulated. The idea manager needs all the information, with drill-down capability from the main department to the subordinate organizational objects. This makes it extremely easy to analyze where things are going well or not so well in the company. It is usually sufficient for the employee to see certain quotas at the level of the entire company, e.g. during the course of the year compared to the previous year. This is also suitable for the manager. However, the manager should see his entire organizational tree, with drill-down to subordinate units, in order to create transparency here.
Of course, not all managers or employees can be expected to interpret the key figures correctly. This is why attempts are made to condense the value of idea management into a single, much simpler key figure that takes all the quotas mentioned into account in some form.
At the consumer safety group Stiftung Warentest, television sets are rated in detail according to price range, picture and sound quality, connection options, and power consumption. Anyone who wishes to can immerse themselves in these details, but most consumers will only base their purchasing decisions on the overall rating. And, of course, based on the price.
This "one" key figure for idea management can be defined by each company depending on its own goals, as will be explained in more detail below.
The dib points (DIB refers to the German Institute of Business Management) were, incidentally, an attempt to use the "one" key figure in external benchmarking as well. Unfortunately, the matter was more reminiscent of summing up meters, liters, and kilograms.
“One number” reporting
Each company can create its one number reporting (ONR) key figure according to the following blueprint.
First consider the quotas you want to optimize primarily. In our example, these might be the idea quota (IQ), the participation quota (PQ), and the realization quota (RQ).
The next step is to define the target figures.
In other words, IQ(target) = 1, PQ(target) = 50%, RQ(target) = 80%. To simplify matters, let’s assume that the same target values are to be used for all departments of the company.
The quotients IQ(actual)/IQ(target) etc. are dimensionless numbers which can be combined arithmetically in any way.
A value PQ(actual)/PQ(target) of 1 for a department means that in this department the participation quota corresponds exactly to the target value, a value of <1 means that the actual participation quota still falls short of the desired target value.
Now consider whether all key figures should carry the same weight.
In our example, we assume that the company would like to focus in particular on the participation quota. The contribution in this respect should be doubly weighted.
We can now define our key figure for ONR arithmetically as follows: K = IQ(actual)/IQ(target) + 2 × PQ(actual)/PQ(target) + RQ(actual)/RQ(target)
If a department meets all quotas according to the target, then K=4.
If the target is exceeded, values greater than 4 can, of course, be achieved, but this is probably more theoretical, or depends on how generously the targets were set.
Using a German “Bundesliga” table (in descending order according to the value of K), it is now easy to see which departments are in the running.
Everyone understands that a value K=2 is rather modest, whereas a value K=3.8 is top class. Thus, a simple, easy-to-understand tool has been found to offer insight into the effectiveness of idea management.
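The whole ONR blueprint, including the “Bundesliga” table, can be sketched in a few lines of Python (department names and actual quotas are invented):

```python
# Targets chosen in the example above; the weights reflect the double emphasis on PQ.
targets = {"IQ": 1.0, "PQ": 0.5, "RQ": 0.8}
weights = {"IQ": 1, "PQ": 2, "RQ": 1}

def onr_score(actual: dict) -> float:
    """K = sum of weighted actual/target quotients; K = 4 when all targets are met."""
    return sum(weights[q] * actual[q] / targets[q] for q in targets)

# Hypothetical actual quotas per department:
departments = {
    "Production": {"IQ": 1.0, "PQ": 0.50, "RQ": 0.80},  # exactly on target -> K = 4
    "Logistics":  {"IQ": 0.6, "PQ": 0.30, "RQ": 0.60},
    "Sales":      {"IQ": 0.4, "PQ": 0.20, "RQ": 0.40},
}

# "Bundesliga" table: departments in descending order of K.
table = sorted(departments.items(), key=lambda kv: onr_score(kv[1]), reverse=True)
for name, actual in table:
    print(f"{name}: K = {onr_score(actual):.2f}")
```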
Further information is available for target customers in the “my target” area (login required).