InfoMatters

Category: Public Communication / Topics: Knowledge, Media, Opinion research

Driven by Polls

Reporting or making news?

by Stu Johnson

Posted: April 2015

Are opinion polls helping or hindering public discourse?

How many times do you hear the expression “according to the latest poll”?  News outlets use it incessantly, with their “overnight tracking” polls.  Politicians add “the American people say” to demonstrate consensus for their own position or to undermine the opposition. Advocates on policy and social issues also seem quick to bolster their cases (for or against something) with public opinion polls.

We are driven by polls.  They are everywhere, but do they really contribute to the kind of knowledge and informed discussion that is necessary for a democracy to function well?  Let’s pause and take a deeper look. First, we'll cover some basics of survey research. Then we'll turn to issues of question construction and whether the responses have any meaningful value.

Getting the numbers

Survey research is based on the principle of using a small sample of people to make projections about a larger population. Here, “population” means a subset of the “universe” of everyone within the survey area.  In political surveys, commonly used populations include all adults, registered voters, and those most likely to vote.  Normally, but not always, the survey’s population is stated with the poll results. This is helpful, since knowing the precise population can make a difference in how to interpret the results.

In order to be accurate, the sample must be random. That does not mean haphazard; it means that every person in the target population has an equal chance of being selected for inclusion in the sample. But that is getting harder to do.
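To make the idea concrete, here is a minimal sketch (my own illustration, not a description of any polling firm's method) of equal-probability selection from a hypothetical sampling frame:

```python
import random

# Hypothetical sampling frame: one entry per member of the target population.
# In practice, building this frame (addresses, phone numbers, etc.) is the hard part.
population = [f"person_{i}" for i in range(100_000)]

# random.sample gives every member of the frame an equal chance of selection,
# which is what "random" means here -- not haphazard.
sample = random.sample(population, k=1_000)

print(len(sample), sample[:3])
```

An online click-to-vote poll, by contrast, takes whoever happens to show up, which is exactly the self-selection problem noted below.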

For years, the most common ways of conducting surveys were to mail questionnaires or to call the phone number associated with an address. Technological change in an increasingly mobile society presents real challenges: answering machines, cell phones, e-mail, and reliance on social media all make it more difficult for researchers to produce an acceptable sample.

Allow a brief detour to mention two related issues:

  • It should be obvious that online polls should not be lumped in with statistically valid opinion surveys.  They can be interesting, but they represent only the opinions of the visitors to that website who choose to respond. Self-selection is not random.
  • Organizations like SurveyMonkey have assembled large groups of people willing to take part in surveys—in reality you end up getting a sample of a sample, which is fine for many situations if you understand it, but it is not ideal for surveys that require a genuine random sample.

All good surveys should be accompanied by a statement of the margin of error, expressed as a plus-or-minus percentage.  Another measure is the confidence level of the survey, which is typically 95%. That is like saying “we can be 95% certain that the results will be within plus or minus 4% of how the entire population would respond.”  It may be tempting to quote a smaller margin of error, but doing so comes at the cost of a lower confidence level.

Getting further into the weeds, the term reliability addresses consistency of results.  High reliability means the chances are very good that if additional samples were drawn, they would produce results within the same margin of error.
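A rough simulation (my own sketch, with made-up numbers) shows what the 95% confidence level and reliability mean in practice: if we repeatedly draw samples of 1,000 from a population whose true opinion is an even 50/50 split, about 95% of the sample estimates should land within roughly ±3.1 points of the true value.

```python
import random

TRUE_SUPPORT = 0.50   # assume the real population opinion is an even split
SAMPLE_SIZE = 1_000
MARGIN = 0.031        # roughly ±3.1% for n = 1,000 at a 95% confidence level
TRIALS = 2_000

within_margin = 0
for _ in range(TRIALS):
    # Each simulated respondent independently "agrees" with probability TRUE_SUPPORT.
    agrees = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    estimate = agrees / SAMPLE_SIZE
    if abs(estimate - TRUE_SUPPORT) <= MARGIN:
        within_margin += 1

print(within_margin / TRIALS)   # should print a value close to 0.95
```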

How many people do you need in the sample?  Major comprehensive national surveys typically use about 2,000 respondents.  In order to produce quick results and keep costs down, the overnight snapshot surveys rely on far fewer respondents, often in the low hundreds.

So, if hundreds will work, why not do that all the time?  As an example, let’s take the number of adults 18+ in the United States (estimated at 242.5 million by the U.S. Census Bureau for 2013).  Using the Margin of Error Calculator from the American Research Group, Inc., we get the following results:

Population       Sample Size    Margin of Error
242.5 million    200            ±6.93%
                 400            ±4.9%
                 1,000          ±3.1%
                 2,000          ±2.19%
                 3,000          ±1.79%
                 4,000          ±1.55%
                 5,000          ±1.39%

These numbers are based on a theoretical margin of error achieved “95% of the time, for questions where opinion is evenly split.”  A small sample size may be quite acceptable when asking one question and breaking it down along one limited dimension (political party or gender). However, the margin of error will increase as the survey becomes more detailed, the responses more disparate, and the need arises to drill down through multiple layers of analysis. In political surveys, those layers would typically be party affiliation, age, sex, education, income, region and similar demographic factors.

Notice that increasing the sample size initially has a big impact on reducing the margin of error, but then it begins to slow down. In practical terms, survey research companies need to find the optimal sample size based on the cost of producing an acceptable margin of error.
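The table can be reproduced, at least approximately, with the standard formula for the margin of error of a proportion, z·√(p(1−p)/n), using z ≈ 1.96 for 95% confidence and p = 0.5 for an evenly split question. (For a population of 242.5 million, the finite-population correction is negligible.) The sketch below is my own calculation, not the ARG calculator itself:

```python
import math

Z_95 = 1.96   # z-score for a 95% confidence level
P = 0.5       # an evenly split opinion gives the widest (most conservative) margin

def margin_of_error(n, z=Z_95, p=P):
    """Margin of error (as a fraction) for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (200, 400, 1_000, 2_000, 3_000, 4_000, 5_000):
    print(f"n = {n:>5}: ±{margin_of_error(n) * 100:.2f}%")

# Note the diminishing returns: each doubling of the sample shrinks the margin
# by only a factor of about 1.4 (the square root of 2).
```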

It is important to be careful when interpreting survey numbers, especially those close to the margin of error.  For example, if the results show 40% for one option and 46% for another, the actual split with a ±4% margin of error could be anywhere from 36% versus 50% at one extreme to 44% versus 42% (a reversed order) at the other. While such extremes (opposite ends of the margin of error) are highly unlikely, they serve as a reminder that you are dealing with an estimate of the real population, not a fact that can be conveyed with decimal-point precision.

When a sample deviates from the population, the sample can be weighted to bring the results into closer alignment with the distribution of the population on the dimensions being measured (party affiliation, age, sex, education, region, etc.). This has been a problem with some snapshot surveys that can heavily over-represent one dimension (usually party affiliation) to the point where weighting cannot correct the inherent distortion. If you have doubts, look for an explanation of sample methodology and a breakdown of the sample.
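Weighting itself is simple arithmetic, as this sketch with made-up numbers illustrates: each respondent in a group is weighted by the group's share of the population divided by its share of the sample. The trouble comes when a badly skewed sample forces very large weights, so that a handful of respondents end up standing in for a big slice of the population.

```python
# Hypothetical example: party affiliation in the population vs. in the sample.
population_share = {"Dem": 0.33, "Rep": 0.30, "Ind": 0.37}
sample_share     = {"Dem": 0.45, "Rep": 0.22, "Ind": 0.33}   # over-represents one party

# Weight for each group = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)   # roughly Dem 0.73, Rep 1.36, Ind 1.12

# Applying the weights to a (made-up) "yes" rate for some question:
yes_rate = {"Dem": 0.60, "Rep": 0.35, "Ind": 0.50}
weighted_yes = sum(sample_share[g] * weights[g] * yes_rate[g] for g in yes_rate)
print(round(weighted_yes, 3))   # 0.488, versus 0.512 unweighted
```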

The ability to drill down into the data is affected by sample size. For instance, take a question asking for a selection of options on a policy issue. You want to know the responses by party preference. No problem.  But ask for a breakdown by age in the normal ten-year brackets and the smallest sample sizes may begin to stumble.  Some cells may be empty or the numbers not statistically significant. Now ask for the results by income, by region, and by party, and even a good-sized sample may produce some empty or insignificant cells.
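A back-of-the-envelope calculation (my own illustration, with hypothetical category counts) shows why: crossing just a few dimensions divides even a respectable sample into dozens of small cells.

```python
SAMPLE_SIZE = 1_000

# Hypothetical breakdown dimensions and the number of categories in each.
dimensions = {"income bracket": 5, "region": 4, "party": 3}

cells = 1
for categories in dimensions.values():
    cells *= categories

# Average respondents per cell if answers were spread evenly (they never are).
print(f"{cells} cells, about {SAMPLE_SIZE / cells:.0f} respondents each on average")
# 60 cells, about 17 respondents each -- too few for reliable comparisons, and the
# uneven real-world distribution will leave some cells nearly empty.
```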

Drilling down, or “mining,” the data is an effective way to reveal patterns that might be hiding in the details. However, there is a limit to how far this can go. I have done analyses where the client had to be warned against drilling down further than the data could reasonably support.

Question neutrality or bias

A good sample is only the beginning. In order to accurately determine the opinion of the population on any given subject, the questions must be presented in a way that does not unduly influence (bias) the results.  Unfortunately, too often—and particularly in the snapshot surveys used by various media—questions are either not crafted well, leading to an unintended bias, or they are crafted in a way that forces the respondent to provide answers that serve the survey publisher’s agenda.

Combined with problems with the sample, question bias helps explain why results from different surveys can be contradictory.  When that occurs, you should dig deeper to check the validity of the sample and the wording of the questions—and the responses available.  Sometimes the question itself may be fine, but the responses may be limited or even distorted, leading to a false impression. In other cases, the bias is much more blatant, some examples of which I hope to include in future postings.

Longevity

Most of the polls I’m talking about here have very short shelf lives.  Often in survey research, however, you want to track changes over time.  This is a fundamental part of the Religion in America project highlighted on this website. Some of the charts go back to the 1960s, and a few even further than that. Producing meaningful charts over that period of time requires that the same or an equivalent question was asked of a similar sample each time the survey was taken.  Minor changes in terminology or reclassification of some data may be manageable, but wholesale changes make it very difficult or impossible.

When you see surveys that track results over time (approval ratings, candidate preference, etc.), they are only valid if the same methodology was used each time—similar samples with the same question wording.

How valuable are the opinions reported?

Now we come to one of my biggest problems with much of opinion polling today, especially those ever-present snapshot polls. When a question is posed about a policy issue, I want to scream at the TV, “How do they know anything about that?”  There is an implication that the question has been asked of knowledgeable people who have formed their opinions after careful consideration of the facts.

Too often, it is obvious that what is being polled is the feelings of people toward a subject. And where do those feelings come from? Social media, snapshot polls, sound bites, talk radio, late night TV shows, and on and on. It’s nothing new, however. In 1947, social science researchers Hyman and Sheatsley wrote about “chronic know-nothings,” and surveys since then have indicated little change—fodder for another blog posting!  Sadly, the know-nothings have only found more sources of opinion and misinformation, ignoring the facts whose availability has also exploded.

Encouraging informed opinion

There is certainly a place for the use of opinion polling by news media, but I object to its overuse and misuse, particularly asking for opinions on subjects about which only a few respondents are likely to have an informed opinion or, even worse, asking questions that are little more than a charade for advancing an agenda.  In a similar way, when public officials base their arguments on this type of polling, they abdicate their responsibility to help promote the development of an informed electorate.

Instead of soliciting tweets to some hashtag about your feelings on a subject, would we not be better served by more use of "to find out more" links . . . and then engaging in dialogue?




Stu Johnson is owner of Stuart Johnson & Associates, a communications consultancy in Wheaton, Illinois focused on "making information make sense."



