Don’t Let Rate My Professors Decide Which Courses You Take – The Varsity


Rate My Professors (RMP) has established itself as the go-to site for researching and rating professors, so it is worth being aware of its role in shaping our perceptions of them.

According to the website’s ‘About’ page, RMP allows students to “figure out who is a great professor and who you might want to avoid.” However, I believe this is the wrong way to approach the site. We shouldn’t let what I see as a flawed system have too much say in whether we ‘avoid’ certain professors.

In 5.0s and 1.0s, everything is fair game – or is it?

“I’m only giving a 5 to make up for my previous score of 1 – I believe the average of 3 is a much more accurate depiction of the professor’s (corrected) teaching standard,” reads an anonymous comment on a U of T professor’s RMP page. While I commend the student for adjusting their rating, I believe this comment demonstrates how polarizing the site’s numerical rating system can be.

Because RMP relies on voluntary self-reporting, a 2021 Texas State University study of RMP data emphasizes that this kind of reporting is often biased and open to misrepresentation: it tends to overrepresent individuals with strong views, especially negative ones. As a result, the discourse around a professor and their course becomes distorted by the impassioned voices that dominate it.

This aligns with my own experience on the site, where I frequently encounter ratings ranging from 1.0 to 5.0 for the same professor, and I am often left wondering which score the professor actually deserves.

In an interview with The Varsity, fourth-year economics student Dean Locke said RMP ratings were “pretty accurate overall.” However, he thinks that “using other students’ scores to determine how much I want a professor almost feels like treating them like an Amazon product.”

Locke makes a fair point. Numerical rating systems can be quite reductive, especially when evaluating a person. While they provide an at-a-glance indicator that is especially useful during the busy enrollment process — I’ve been there, too — I believe they ultimately discourage students from gaining a deeper understanding of a professor’s teaching style. That, in turn, may prevent students from giving some professors a fair chance.

What’s in the review?

Fourth-year math student Lucy Borbash is hesitant to use the site. As Borbash said in their interview with The Varsity, they prefer to form their own opinions without being influenced by reviews that might deter them from trying something. When they visited a “beloved” professor’s page, they were surprised by his reviews, including some describing him as “unreasonable and oblivious to student complaints.”

Borbash said: “In my experience, he was actually very helpful… You could email him and ask him anything, and he would check in with us periodically… But if I had read in advance that he wasn’t supportive of his students, I might not have asked [for help].”

I agree with Borbash. If you have no choice but to take a course with a professor with a 1.0 rating, focusing solely on negative reviews may do more harm than good. A 2007 study published in the journal Measurement and Evaluation in Higher Education found that exposure to negative RMP ratings led to negative expectations and, in turn, negative experiences for students.

Similarly, a 2009 experiment published in the Journal of Computer-Mediated Communication found that negative expectations created by simulated RMP feedback led to exam scores on course material approximately 10 per cent lower than those of students who read simulated positive comments. The study also showed that the positive group scored higher than students who were not exposed to any RMP ratings. If the ratings are skewed and affecting academic performance, should we avoid the RMP website altogether?

While I don’t believe the site’s ratings and reviews should be the deciding factor in course selection, I do think RMP can be incredibly useful for gauging what to expect from a class. It is important, though, to be mindful of how we use the site: set aside hyperbolic comments, focus on information about how the course is taught, and distinguish between subjective and objective reviews.

Prioritizing comments like “This professor requires weekly exams” over comments like “This professor is a boring lecturer” will be more productive in the long run, both in terms of managing your expectations and setting yourself up for success in the classroom.

Athen Go is a fourth-year student studying architecture, English, and visual studies. He is the editor-in-chief of Goose Fiction.
