Why give prizes

Or the reward could stack, adding more value each time the customer refers someone new. The same thinking applies to contests: you can offer an additional ballot in the contest you're running each time a customer gets someone new to enter the draw, or give customers a referral code they can share with friends and family.

You'll be surprised at the lengths customers will go to for those extra few ballots toward contest prizes they really want, or for a small discount code they can apply to their next purchase. Best of all, you won't have to reach these referred customers directly, because your existing customers will do the legwork for you.

Just be sure that your referral program's reward structure benefits both the referrer and the referee if you want to see the greatest results.

Running the same quarterly contests might work for some businesses, but for most, continuing to offer the same prizes over and over can have the opposite effect. Clients begin to lose interest in your prizes altogether because it's the same old, same old.

Even someone who's won before likely won't get nearly as excited about vying for the same prizes a second time. Some may not even enter, which can stall the momentum you've been building in that customer relationship. Remember: your prizes don't have to be massive or expensive to create excitement.

Varying your prizes is also a great opportunity to start thinking seasonally. You can drive demand by tying contest prizes into seasonal promotions: Christmas, Valentine's Day, Halloween, and so on. One useful approach is to find seasonal prizes with the widest possible appeal, beyond just moving some excess inventory. Think of the restaurants that invite customers to drop a business card in a bowl: they'll pick a winner periodically for a free lunch.

Why not take that idea a step further? Create actual ballots that visitors can fill out in your online community, on your website, or at the door when they arrive. Then hold a draw at the end of January for a romantic, three-course dinner for two around Valentine's Day.

Everyone loves a free meal, but you're giving the winner more than that: you're giving them an experience, and saving them from having to plan Valentine's Day themselves. You'll be thankful later, too, when you're planning your next email marketing campaign. Looking back at our car dealership example from earlier, you can see how this kind of promotion doesn't necessarily have to cost a lot. By offering service prizes instead of strictly product prizes, you reduce the costs of running the contest and increase your ROI.

In this instance, customers would see these prizes as substantial because they can win services that would normally cost them hundreds of dollars a year. For the dealership, the cost is substantially lower.

All the service prizes are provided in-house, so the overhead costs are negligible. And because these prizes are services rather than products, the only real incremental cost to the dealership is labor, which is marginally cheaper than product prizes and more easily rolled into regular operating costs. While the service prizes may seem larger and more expensive than a year's supply of air fresheners, the reality is that, managed properly, they would actually cost the dealership less overall.

This means greater ROI for the business and better customer perception of the prizes being offered. Together, these factors are a recipe for strong customer loyalty and business growth.

The same logic applies to survey incentives. Cash and other monetary incentives are powerful for raising response rates, and the more you offer, the more responses you get. To decide how much to spend, consider your budget constraints and your target population.

For example, professionals and students place different values on their time, and getting the former to respond will cost you more. If you choose a non-monetary reward, check that it actually appeals to your respondents. You can also choose between prepaid incentives, where respondents receive their reward before they even complete the survey, and promised incentives, where only respondents who complete the questionnaire receive the reward. It may sound counterintuitive, but research shows that prepaid incentives increase response rates more effectively than promised ones.

Obviously, prepayment is a more costly method, since you have no guarantee that everyone who gets a reward will complete the questionnaire. It is also harder to implement for online surveys than a promised reward, which can take the form of a gift certificate delivered by email or a gift sent by mail. You can also use a sweepstakes, raffle, or lottery to award the prize to a smaller number of people.

The alternative is an individual incentive, where you give a reward to every survey participant. Individual incentives offer more direct compensation to every survey taker, but a sweepstakes can be an option when your budget is limited.

Instead of rewarding the respondent directly, you can also offer to make a charity donation in their name. Donations provide an alternative that can increase response rates without encouraging satisficers, though there are complications, chief among them that their effectiveness has not been proven. As you can see, survey prizes are a tool that can serve survey creators well after careful planning and evaluation of the pros and cons.

To be sure, they are a tool that carries some risks, but also rewards.

And what a lot of prize activity there is: we have just completed the annual ritual of Nobel Prize announcements, and Wikipedia lists several hundred famous prizes given more or less annually. All this led me to reflect on the point of such activity.

But there is an anti-prize sentiment in the air.

When it comes to defining a problem that a prize can address, different kinds of expertise come into play. Technical experts can be valuable for grappling with the science and technology underlying the problem. Academics and industry representatives can be highly useful for evaluating the time, expertise, and expense needed to solve certain kinds of problems.

Designers and strategic thinkers can help refine and reframe problems in ways that are conducive to prize-based solutions. Finally, a gifted facilitator can help to ensure that these different types of professionals have the right conversations and make progress toward a workable problem statement.

All manner of problems may be amenable to prize-based solutions if they are defined properly. For one component of a USAID prize, for example, the agency identified the unpredictability of genocide as the problem, which led it to seek algorithms that could forecast potential hot spots based on socio-political indicators and historical trends. As designers begin to prioritize specific areas of the problem for research, they can also begin evaluating what combination of potential solutions may best achieve their desired outcomes.

An example from CoECI emphasizes this point. Every year, fraud, waste, and abuse in the health care industry account for hundreds of billions of dollars in losses.

Given the challenges of identifying fraud, the partners took time to define the problem, focusing on how current software systems could not effectively perform risk scoring, validate credentials, authenticate identities, or run sanction checks. To tackle this problem, they launched the Provider Screening Innovator Challenge, which sought screening software to help ensure that Medicaid funds are not spent fraudulently.

As a result, the partners obtained an ecosystem of solutions based on submissions from more than 1,000 participants in 39 countries. The software applications developed through the challenge series are being compiled into an open-source solution for the state of Minnesota—and perhaps the nation. Problem definition discussions inevitably raise important questions about which approach—a challenge, a prize, or some other mechanism—can generate the best solutions. Experienced prize designers have learned that incentive prizes are not appropriate for every type of problem and are not a silver bullet even for the right problems.

Prizes should not be used when there is a clear, established, effective approach to solve a problem. Prizes should not be used when potential participants are unwilling or unable to dedicate time and resources to solve the problem. For instance, as appealing as start-up companies may be as prize participants, they are rarely able to shift their commercial focus to a challenge. Prize designers need to understand the risk tolerance and capabilities of their potential participants before committing to the use of a prize that requires their engagement to be successful.

Prizes should not be used when there are only a limited number of participants who can address the problem. If the universe of participants is small and known, then a prize may not be necessary. Experienced designers often combine prizes with push mechanisms to achieve their goals more quickly and effectively. For example, the Army Research Laboratory ran five prizes that successfully identified new methods for generating energy from a walking hiker and new ways to produce potable water for humanitarian missions.

The winning solutions came from individuals around the globe—many of whom would not have had the opportunity to work with the Army through other means. The Army Research Laboratory plans to continue developing these ideas through traditional push mechanisms such as testing at laboratory facilities and future small business funding opportunities.

Although extensive consideration may be required to determine whether a prize is suitable, this preparatory work has not put a damper on experimentation over the past five years. Many agencies, such as NASA, embrace prizes and translate their growing confidence and experience into policies that codify and explain their problem-solving strategies. This work can be immensely helpful for less experienced organizations considering similar approaches.

Most experienced designers consider prizes to be just one important problem-solving approach in a larger portfolio that includes challenges and other, traditional approaches as well.

In other cases, an agency uses traditional contract arrangements to implement designs solicited through prizes. US government prize designers, in particular, must look carefully at the legal constraints they face; typically, this involves early liaison with general counsel to avoid unwelcome surprises. Several laws can affect incentive prizes run by federal agencies. The need for a legal strategy applies to state and city prizes as well, because legal requirements must be considered in light of desired outcomes.

For example, designers of the New York City Big Apps Challenge intended to spur the development of tech businesses and therefore opted to let participants retain the intellectual property rights of the apps they created. The conclusion of a prize also poses legal considerations that should be addressed early in the design phase. Perceptions of faulty evaluation criteria or unfair judging procedures can lead participants to take legal action, especially if the stakes are high.

Committing to the transparency of the judging process and ensuring that participants can view scoring and selection criteria when they register for the prize can ameliorate such issues. In the federal context, the Government Accountability Office recently ruled that it did not possess the legal authority to adjudicate a dispute related to a prize offered by the Federal Trade Commission, despite its well-established ability to do so for contracts.

After reviewing these considerations and engaging in an iterative problem definition process, designers will be ready to begin building a prize. Designing a successful prize can be a daunting task. No one formula is adequate because each prize addresses a unique problem and set of potential participants whose incentives must be carefully understood. Many public organizations do not possess all of the skills and capabilities needed to design an effective prize, such as online platform development or marketing expertise.

In some cases, the necessary abilities involve distinct and highly specific insights into market dynamics or participant incentives. And in almost all cases, designers need help with problem definition, because a poorly defined problem statement can make it extremely difficult to achieve the desired outcomes.

Despite the unique nature of each problem, designers can rely on certain common elements. These can be thought of as ingredients, combinable in various ways to design prizes that generate specific outcomes. All of the elements matter, together forming an integrated and often complex set of strategic choices. How designers assemble and use them is at the heart of prize design. There are many ways, for example, to craft a communications strategy to draw the attention of potential participants to a prize.

But who should develop the communications campaign and its messaging? What channels should be used? How much time and money can be spent on the campaign? How can we measure its success? These are just some of the questions that designers must answer. The strategic choices involved in challenge design can be grouped into five core design elements. Below we discuss these elements and feature examples of how designers use them to create, implement, and ensure the legacy of their prizes.

Depending on the desired outcome, the phases of a prize can vary considerably in length, cost, and demand on resources. They can involve a few or many small contracts for vendor services, as well as different types of partnerships.

Most importantly, as each of these phases unfolds, designers learn a wealth of new information about what successful execution will require, with inevitable impacts on resource requirements and timing.

One major resource requirement, of course, is funding for the purse. Even if no one wins the prize, however, administration costs can be substantial, particularly if the goal is to achieve outcomes that require significant commitments to marketing, mentorship, and networking. Some initiatives focus entirely on spurring collaboration among innovators: they offer no monetary incentives and instead invest their resources in helping participants develop and scale their solutions.

Furthermore, prize administration involves significant costs that fall into different categories including, but not limited to, labor, technology platform, marketing, events, travel, and testing facilities. Labor costs are involved in developing prize rules, advertising the prize, connecting with participants, administering interactions among stakeholders, judging entries, and evaluating the success of the prize after award. These activities will require a diverse team, with subject-matter experts to develop, advertise, and judge the prize, and experienced administrators to run it.

In some prizes, for example, post-award coaching, technical assistance, and networking have been provided to continue to spur action following the award.

The technology platform used to facilitate certain prizes also represents a major cost, as well as a critical component for success. Such online platforms can help target the right audiences, enforce rules, and standardize submissions. NASA partnered with the online challenge platform Kaggle, using its leaderboard feature to offer an environment allowing data scientists and mathematicians to collaborate and compete.

Access to testing facilities can be another major cost, and the US government has been a key source of such facilities. In the Wendy Schmidt Oil Cleanup X Challenge, a Department of the Interior testing facility hosted physical and laboratory testing of finalist prototype designs for high-performing oil cleanup equipment, and in the Progressive Insurance Automotive X PRIZE, Argonne National Laboratory provided dynamometer testing of the super-efficient finalist vehicles.

Because prizes are still relatively novel, designers must often commit resources to mobilizing their own organizations, frequently by enlisting an internal champion. Most champions are senior executives, but they can also be other employees who have the networks and political capital needed to generate momentum. Champions can clear away significant internal barriers by clearly communicating to employees how solutions derived from the prize will supplement and support those developed within the organization.

Finally, designers should expend resources to find partners that can help fund prizes and play various strategic roles in execution. Many designers carefully assess their own internal capabilities to understand the kind of partner support they may need, and many believe that partners from the private, public, and philanthropic sectors can help unleash the full potential of prizes.

In one challenge, this kind of partner integration resulted in the participation of teams from more than 15 countries. When selecting partners, designers often consider a number of factors, including what control may be ceded to partners in prize administration and how partners' brands and support can help the prize succeed.

Evaluation includes a broad set of assessment and measurement activities that occur during every stage of a prize. It involves the initial determination of whether a prize is likely to be effective and appropriate, assessment of the quality of implementation processes, development of the criteria and mechanisms used to select winners (including providing feedback to participants during and after the prize), and evaluation of impact and overall value.

Proper evaluation is critical because it can affect whether participants view the prize as fair, shape the validity of the results, and, thus, ultimately determine its success. Effective evaluation is also an essential input to strong prize management, both to improve implementation processes and to inform decisions about whether to use a prize again.

In the early stages of design, there are two useful evaluative techniques. First, designers can assess whether the incentives they plan to offer are likely to motivate the intended participants; a monetary reward, for example, may prove to be a stronger incentive for some participants than the opportunity for professional networking or coaching. This is also a good time to determine how prize-generated incentives may be influenced by the external environment (that is, incentives from other domains, such as the market, and other interventions, such as previously existing challenges seeking similar outcomes).

Second, using research and logical analysis, it is important to check whether the planned challenge activities and outputs are likely to achieve the desired outcomes. This evaluative technique includes identifying other factors that could help or hinder the achievement of those outcomes. The major benefit of this early assessment is that the design can still be changed to address these factors, for instance by adding activities that reduce risks or reinforce positive outputs, such as elements of a broader program that supports scaling up once the prize has identified winners.

The quality of the implementation processes should be evaluated during and after the prize to determine whether discrete activities were actually successful. For example, some designers undertake special efforts to identify participants with particular characteristics. In some cases, this recruitment involves finding participants with specific technical expertise; in others, the goal may be to engage new and diverse individuals and organizations in the problem-solving space.

In all cases, capturing good information about these processes during implementation can guide efforts to iteratively improve engagement activities for the current prize and provide insight into more effective engagement efforts for future prizes. Similarly, evaluation should include looking for patterns of who initially engages but then drops out or fails to continue through several rounds.

It may be that the prize needs to be redesigned to provide additional support or that the current process is effectively winnowing out those who are unlikely to provide useful ideas or results.

A unique element of evaluation in prizes is defining the criteria used to select the winner or winners. In creating these criteria, designers shape how participants will work, prevent unintentional and undesirable outcomes, and curb potential fraud. Appropriate selection criteria are grounded in and consistent with the overarching view of how the prize will generate change or solve a problem. Because the wrong criteria could lead participants to submit solutions that do not actually address the fundamental problem, designers often review their selection criteria repeatedly, working with internal and external stakeholders to anticipate and account for all possible responses.

One helpful practice for designers to follow is to open up draft rules for a period of public comment, as was done by USAID recently for its potential challenge on desalination technologies, by the Department of Energy for its potential challenge on home hydrogen refueling technologies, and by NASA for its various Centennial Challenges. Designers should also carefully consider whether to use quantitative or qualitative criteria, or a mix of both.

When quantitative criteria are not applicable or relevant, clear parameters and appropriate evaluation arrangements become even more critical. In the case of the Prize for Community College Excellence, the Aspen Institute needed to find a way to evaluate qualitative data about US community college performance.

To make this process as rigorous and independent as possible, the institute employs a third-party evaluator that specializes in evaluation criteria framework design and in collecting and analyzing such data to ensure a strong basis for evaluation. To ensure validity and objectivity in the evaluation process, designers should determine who will judge submissions. Expert judging can be effective when the desired solution is highly technical, while crowdsourced voting is valuable when the goal is to engage public participation.

Judges with particular domain expertise can lend credibility to the challenge results and can improve submission quality through formal and informal feedback, if that feedback is built into the prize structure. One of the important elements of high-quality evaluation is to revisit the criteria at the end of the prize and assess whether they were appropriate: did they lead to the selection of the best winning solution or solutions?

If the winner did not perform well and some unsuccessful participants seemed stronger, it may be that the criteria were not right or were not operationalized correctly. For example, if simple weighting is used to derive an overall score, a proposal that scores badly on one criterion and well on others can end up the winner overall even though it is inadequate in a vital area.
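
To make that pitfall concrete, here is a minimal sketch; the criteria, weights, and scores are hypothetical rather than drawn from any real prize.

```python
# Toy illustration of the simple-weighting pitfall: a proposal that fails a
# vital criterion can still win on a plain weighted sum. All numbers are hypothetical.

WEIGHTS = {"technical_merit": 0.4, "cost": 0.4, "safety": 0.2}

proposals = {
    "Proposal A": {"technical_merit": 9, "cost": 10, "safety": 2},  # fails the vital safety criterion
    "Proposal B": {"technical_merit": 7, "cost": 7, "safety": 8},   # solid across the board
}

def weighted_score(scores):
    """Plain weighted sum: strong scores elsewhere can mask a failing one."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

def passes_floors(scores, floor=5):
    """Minimum thresholds on each criterion prevent that kind of compensation."""
    return all(value >= floor for value in scores.values())

for name, scores in proposals.items():
    print(f"{name}: weighted score {weighted_score(scores):.1f}, "
          f"passes minimum thresholds: {passes_floors(scores)}")
# Proposal A wins on the weighted sum (8.0 vs. 7.2) despite being inadequate on safety.
```

One common remedy, consistent with the point above, is to pair weighted scoring with minimum thresholds on criteria that must not be traded away.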

Another major component of evaluation is measuring prize impact, and designers should develop measurable indicators of success before launching the prize. Doing so during the design phase is helpful in several respects: it reinforces discipline in the design team by ensuring that design elements link to desired outcomes, and it assists the organization in assessing its overall return on investment. In anticipation of end-of-prize impact evaluation, measures of success can also be deployed for intermediate outcomes, such as milestones for building prototypes or website page impressions for raising awareness.

In addition, designers can evaluate other important intermediate outcomes, such as strengthening the community of participants, improving their skills and knowledge, and mobilizing capital on their behalf. Because measures of success can be both quantitative and qualitative, effective evaluation will typically include systems to gather both kinds of data systematically and also capture unexpected data, such as wider impacts of the prize process.

There are several common approaches to gathering such data. Designers should note that getting post-award data from participants may necessitate building reporting requirements into the prize rules to enforce compliance or allow access. The use of objective, third-party data such as government statistics can increase the credibility of the prize evaluation process, but in almost all cases designers will also need to obtain new data.

Returning to the Aspen example, the institute asked the eligible institutions to submit applications featuring data about how they were advancing student learning. There should also be an overall evaluation of whether the prize was worth it. This is not a simple matter of comparing the direct cost of running the prize to the value of the solution produced.

In some cases, a prize might have been unnecessary, and the solution would have come about through other means. Nor should measuring change be limited to positive impacts: particularly for government agencies, there should be follow-up to explore whether prize implementation has had unintended negative effects. Return-on-investment calculations also often leave out the wider costs incurred by other parties in the process. Such an analysis can be helpful in checking for potentially negative wider impacts, such as organizations becoming less inclined to participate in prizes because of the low return on their own investment.
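
As a minimal sketch of that last point, with entirely hypothetical figures, a return-on-investment calculation looks very different once participants' unreimbursed effort is counted alongside the sponsor's direct costs:

```python
# Hypothetical figures illustrating how ROI narrows once wider costs are counted.
solution_value = 2_000_000      # estimated value of the winning solution
purse = 250_000                 # prize purse paid by the sponsor
administration = 150_000        # sponsor's administration costs
participant_effort = 1_100_000  # unreimbursed effort across all competing teams

sponsor_cost = purse + administration
total_cost = sponsor_cost + participant_effort

print(f"Sponsor-only ROI: {(solution_value - sponsor_cost) / sponsor_cost:.1f}x")
print(f"All-in ROI:       {(solution_value - total_cost) / total_cost:.1f}x")
```

The specific numbers matter less than the pattern: the effort of unsuccessful participants, which the sponsor never pays for directly, can dominate the true cost of the prize.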

In addition to measuring the changes that have occurred, there should be some investigation of the extent to which those changes can be attributed to the prize. Experimental and quasi-experimental designs, involving a control or comparison group of participants, may be feasible in some circumstances, but they are unlikely to be cost-effective or ethically acceptable given the human subjects that would need to be involved.

Instead, rigorous non-experimental approaches to causal attribution and contribution are useful for identifying possible alternative explanations for the impacts and determining whether they can be ruled out. These various approaches to evaluation require more than a few simple metrics. Designers need to think carefully about what they are trying to assess, and when and how, so that they can surface the most helpful insights for their current and future prizes.

Designers sometimes create independent teams to assess the success of their work, as illustrated by the Rockefeller Foundation, which uses an evaluation group to study the impact of its innovation projects.

Motivators spur participation and competition. These incentives should encourage the right participants, in the right ways, to do the work required by the prize. The prize award itself is, of course, the most visible motivator, encouraging participation and channeling competitive behavior toward the desired outputs and outcomes. Historically, awards have included cash purses, public recognition, travel, capacity building (that is, structured feedback and skills development), networking opportunities (that is, trips to conferences), and commercial benefits (that is, investment and advance market commitments).

Public sector challenges often feature diverse awards. The size and type of award provide designers with important signaling effects and leverage opportunities. Designers typically try to ensure that the purse is commensurate with the magnitude of the problem, the types of participants required, the amount of time likely needed to reach a solution, and the amount of media and public attention desired. Qualified participants are unlikely to compete if the prize offers a small purse but requires a year or more of effort on a hard problem.
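
A rough way to sanity-check the purse against the effort being asked of participants is to compare the expected payoff of entering with the cost of competing. The framing and every figure below are illustrative assumptions, not a formula from any particular prize.

```python
# Back-of-the-envelope check: is the purse worth a qualified team's effort?
# All inputs are illustrative assumptions.
purse = 500_000             # total prize purse
expected_entrants = 40      # anticipated number of serious teams
win_probability = 1 / expected_entrants
months_of_effort = 12
monthly_team_cost = 30_000  # staff time, materials, testing

expected_payoff = purse * win_probability
cost_of_competing = months_of_effort * monthly_team_cost

print(f"Expected payoff per team: ${expected_payoff:,.0f}")
print(f"Cost of competing:        ${cost_of_competing:,.0f}")
if expected_payoff < cost_of_competing:
    print("Purse alone is unlikely to attract qualified teams; "
          "non-monetary motivators or a larger purse may be needed.")
```

In practice, participants also weigh non-monetary benefits such as recognition, networking, and follow-on opportunities, which is one reason designers pair the purse with the other motivators discussed here.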

Large purses are also more likely to encourage the formation of new teams that bring together technicians, experts from relevant disciplines, and investors. For prizes seeking outcomes such as the development of prototypes, pilots, or market stimulation, this element of design is critical because it helps designers attract outside capital.

Mentorship also can be a motivator and is used increasingly in prize design. Designers can incorporate mentorship in the prize structure, providing participants with access to experts, tools, leading practices, and other resources to accelerate the development of high-quality solutions and support the formation of communities of interest around the problem.

Some designers pair winners with industry leaders to drive post-award momentum. Many designers are also developing collaborative environments, enhancing knowledge sharing among participants through rules and evaluation criteria that encourage them to work together. But collaboration in prizes is not always the right choice: while it may be appropriate for achieving certain outcomes, fierce competition can also be useful, particularly for shortening product development timelines.

Designers should carefully weigh this trade-off between collaborative and competitive motivations when thinking about the best path to a particular outcome. If the goal is a new prototype, for example, the intensity of competition may need to be high to accelerate prototype performance on an aggressive timescale.

If, however, the designer is seeking increased engagement among a population, then more collaboration may inspire others to begin participating in the prize. Finally, for certain outcomes, intellectual property rights can serve as a powerful motivator.

Designers must decide how the sponsoring organization intends to handle intellectual property: does it want to use the solution in a proprietary manner, require that solutions be made available to the public through an open-source license, or simply have access to the solution in the marketplace? The options range from full retention of rights by participants to full retention of rights by the organization running the prize.

Structure, or prize architecture, is the set of constraints that determines the scale and scope of the prize, as well as who competes, how they compete, and what they need to do to win.

A competition period that lasts too long risks losing participants' interest, while one that ends too quickly may not give them enough time to develop solutions. Winner-takes-all prizes can discourage participants with low risk tolerance, whereas prizes with well-defined phases and milestones can modulate competition, winnow participants at different stages, and reward only the most innovative solutions. Because of such considerations, successful designers devote significant time and effort to prize architecture.

Eligibility requirements shape the population of participants. Which participants should designers target—individuals, teams, organizations, established institutions, or even political entities such as cities or states? The choice involves at least two considerations. First, given the desired outcome, who is best positioned to solve the problem?


