It is something we all do: We meet people and put them into a category. They’re sharp, they’re slow. They’re liberal, they’re conservative. They’re creative, they’re workmanlike.

We do it, of course, because it’s useful. It’s tough to juggle the mountain of details about everyone we meet, and we need an easy way to think about them.

And we do it in the workplace as well, when managers routinely put employees into one of three boxes: people who perform well (A players), those who perform poorly (C players), and those who are stuck in the middle (B players).

But the inclination to put people into boxes and leave them there can do a lot of harm—to managers, to the employees and to the companies they work for.

For one thing, the boxes are just plain wrong. Broad categories miss a lot of subtleties, so people’s true talents and strengths often get ignored. They also make it hard to recognize when people improve or stumble in their performance. The notion that good performers are always good also contributes to what psychologist Edward Thorndike called a halo effect, the mistaken belief that if you were an A player in one thing, you will be an A player in another.

What’s more, our prophecies become self-fulfilling—we give people we see as stars more opportunities to succeed, while denying those chances to those we expect to fail.

Although the A-player notion has no doubt been around forever, it was made famous by Jack Welch in the form of his “vitality curve” describing differences in employee performance. It’s used as a justification for forced ranking systems, where employee performance is judged explicitly relative to other employees. The advocacy of the A-player framework by many consultants helps keep it in the back of the mind of most executives, even if they don’t use it explicitly.


It’s easy to see why so many executives think this way. For starters, there is the cognitive bias known as fundamental-attribution error: We tend to assume that people behave the way they do because of who they are rather than the circumstances around them. For instance, we almost always figure that drivers racing through traffic are jerks, without considering the possibility that they are going to an emergency.

That means we assume A players perform better because they have more ability or talent than the others. But we don’t consider that, for instance, they might have gotten a string of easy projects where they could shine. Or maybe we think they’ve done a good job simply because we expected them to do a good job.

Likewise, we assume that people performing poorly in their jobs are C players rather than people who are struggling with problems outside of work or have just been given an impossible assignment. Or again, maybe we judged them as performing worse than they actually did because we expected them to do poorly.

The A-player model is also appealing because it’s reassuring to those at the top: It tells them that they are leading because they are better, more capable individuals than others. It’s worth noting that I hardly ever run into mid- or lower-level managers who believe in this model, but I hear it frequently as I go up the organizational chart.

If we believe the A-player notion, then managing people is pretty simple: Get rid of the C players, encourage the A players, and keep some of the B players to do the routine work. It is also an easy way to assign projects, identify whom to promote and make other workplace decisions.

No doubt, a lot of managers right now are shaking their heads. They believe firmly that there are A players, B players and C players. There are superstars and there are lousy performers. Period.

The problem is that there is precious little evidence to support the A-player model and the basic idea beneath it. Objective measures of individuals’ actual job performance show that performance varies a great deal over time, even within the same year.

Consider a basic question a lot of managers face. You’ve got a choice between two recent college graduates with similar experience, but one of them has a sparkling grade-point average, while the other’s isn’t so impressive. Which one is a better bet to hire?

For many managers, that GPA is enough to put the two potential hires into mental categories. The better student is obviously an A player, and the stellar performance in school will carry over to the workplace. Why wouldn’t it? The lesser student, on the other hand, will keep right on being lesser on the job.

Yet extensive research shows that GPAs predict very little about job performance.

Something similar happens when managers hire from inside a company. A recent study of internal mobility by JR Keller at Cornell University examined what happened when managers were sure they knew a good performer to appoint to a job—in other words, when they were positive they had somebody who fit their mental category of an A player.

But the managers made better choices when they couldn’t simply go with the person they were sure was the best choice. They wound up with better performers when they had to post the vacancy internally and assess many candidates, reading through the details of each candidate’s actual performance.

Or consider “high-potential” programs, used by more than half of U.S. corporations. In this system, people are singled out as A players, often after only two years’ performance, and groomed to rise higher and higher in the company. Yet the evidence shows that people are kept in those programs no matter what their actual performance is—and only 12% of companies report that their employees see the process as impartial.

Clearly, the consequences of assuming people perform the same all the time can be damaging.

Once workers get the C-player label, they get shunted aside or pushed out of the company, even though their performance may well improve in the next period, especially if they are given a little help.

On the other end, we often assume that employees with the A-player label have more ability and will be better in any job, so they get designated as high potentials. But even employees who are the best overall performers on their team aren’t great all the time, and they may not have the skills to do a new job. That means the company isn’t going to get the best person for that new job.

What about the people in the middle? Research shows that when employees believe their current performance isn’t recognized, that they are stuck with a B-player label even if their performance shoots up, incentives no longer motivate them to work harder. So they give up trying. Some simply decide to quit and start over at another company. Others stick around but don’t put in the effort they could. Both outcomes cost the company a potentially excellent worker—and can leave the worker embittered.

All of which leads to the key question for managers: How do we shake off the natural inclination to put our employees into boxes that may not reflect their actual abilities and performance?

Changing a mind-set isn’t easy. One simple step is to respond to an employee’s performance more frequently and quickly, reflecting how variable that performance actually is. Rather than give someone an average bonus for the year, for example, it would make much more sense to give out those bonuses on a project basis. That could mean an excellent bonus to go with a superb performance on one project, right after it happens, and no bonus for a poor performance on a second project.

We should also rethink high-potential programs. Instead of sticking people in them and leaving them there, we should monitor their performance and take them out if they slip—and slot in people whose performance is surging.

Another technique is to create new performance categories based on the actual tasks that people have to perform. Who is good at negotiating agreements, at running meetings, at managing projects? Knowing such things breaks down the tendency to rely on a simple “good employee/bad employee” classification.

Finally, supervisors must be active about managing their subordinates. The evidence is overwhelming that it does matter how we set expectations, assign projects, hold people accountable for performance, provide feedback and so forth.

That’s not always easy to do. I was approached recently by a couple of junior human-resources managers who asked me if I thought it would help performance if their company held supervisors accountable for the way they managed their own subordinates, including how carefully they assessed their direct reports. I certainly did, I said.

Then they told me that their executives had recently turned down that idea. Why? The executives said they didn’t want to have to do all that for their own direct reports.

It is easier to play along with the A-player model and assume that job performance is hard-wired. But that assumption has the drawback of being both wrong and bad for business.

Dr. Cappelli is the George W. Taylor professor of management and director of the Center for Human Resources at the Wharton School. Email him at reports@wsj.com.
