ETS’s Approach to Measuring Digital Literacy

After months of research on digital literacy, most people would probably be exhausted and want a break. But when I saw that the Special Libraries Association had a program titled “iSkills Assessment of Digital Literacy” & that Irvin R. Katz, Senior Research Scientist at Educational Testing Service, would be facilitating it, I was extremely excited. I thought it would give me an inside look at the large-scale efforts being made to assess digital literacy skills and an opportunity to forge relationships with people who had similar interests.

When I arrived, I discovered that due to a personal situation Irv was not able to make it and had sent Bill (William) Wynne to lead the workshop. A bit disappointed, I was still hopeful that it would be a rewarding experience. As it turns out, I was right (don’t you love that feeling?). Unfortunately, I will not be able to cover all of it here. What I will be able to do is sprinkle commentary among a summary highlighting portions of the presentation.

Wynne opened up the discussion by talking about areas of life that technology has transformed. One example he provided was that he had used his iPhone to search for an alternative route to the convention center because the original route was congested. The possibilities presented by technology are limitless, and so some people use it in meaningful ways while others use it as a toy.

ETS’s interest lay in assessing to what extent individuals were able to use it in meaningful ways. After several years of research and collaboration with stakeholders across the globe, this goal was finally realized. ETS, along with these stakeholders, defined digital literacy and developed a construct to measure it. I was very pleased to hear that and was looking forward to comparing ETS’s model with one I have been toying with employing in my research.

Among all the things that influenced ETS’s framework, the strongest influence came from the U.S. Labor Secretary’s 1991 description of an effective worker, which reads that an effective worker is able to

  • Identify, find, and select necessary information,
  • Assimilate and integrate information from multiple sources,
  • Represent, convey, and communicate information to others effectively,
  • Convert information from one form to another &
  • Prepare, interpret, and maintain quantitative and non-quantitative records and information, including visual displays.

Subsequently, ETS developed a digital literacy outcomes assessment test that consisted of what Wynne referred to as 7 “meaningful” categories:

  1. Define
  2. Access
  3. Manage
  4. Integrate
  5. Evaluate
  6. Create
  7. Communicate

As he continued talking about this, I was trying to figure out why his model looked so familiar. Then it hit me! It resembled many of the frameworks I came across while completing my teaching practicum in Karen Fisher’s Information Behavior course. As I sat there, I wondered if I could neatly lay ETS’s framework over the one developed by Mike Eisenberg & his colleagues, so I opened a browser and pulled up the Big6 website (Off-topic: To my surprise, Mike’s website did NOT come up on the first page of Google results).

The Big6 model has the following steps:

  1. Task Definition
  2. Information Seeking Strategies
  3. Location & Access
  4. Use of Information
  5. Synthesis
  6. Evaluation

I was not surprised to discover that ETS’s & Big6’s models had a few differences. I believe the differences had more to do with verbiage than anything else. However, ETS’s framework did contain a category called Manage which, as far as I could tell, was not represented in the Big6. ETS defined manage as “Using ICT tools to apply an existing organizational or classification scheme for information” (ETS, 2005). This definition did not do anything to help me understand why it is “meaningful.”

When I finally focused back in, Wynne was talking about a 2003 pilot test, which revealed that test takers assumed the iSkills test focused on technology skills (IC3 Assessment, ICDL, etc.), that is, mastery of applications, when it was much more than that. According to Wynne, the test dealt with the application of skills: could a student use a spreadsheet to solve a problem, rearrange a PPT to tell a story, etc. Essentially, it assessed whether a person possessed both technical literacy and REAL information literacy skills.

Wynne spent a considerable amount of time talking about how ETS worked with members of the community to ensure the test was reliable and had face validity. During this time, he described multiple choice tests as non-authentic assessments (because they fail to measure certain things) & argued that information literacy tests that employ them are not REALLY measuring whether a person can do these tasks in a digital environment. Although the term “non-authentic” is new to me, I am very familiar with this critique of digital literacy studies. Wynne then went on to discuss what makes ETS’s digital literacy test authentic and how ETS ensured the validity and reliability of the test. He spent about twenty (20) minutes reviewing questions that are on the exam, and then presented findings from research conducted using iSkills.

The research presented was conducted with 12,000 students at 76 institutions (ranging from high school to college). One of the major findings of their work was that only 27% of the individuals who participated met digital literacy expectations. If I had not been reading the literature in that area, that number might have been surprising. What was surprising was that participants accurately assessed their own skills. The other surprising thing was that “doing a lot of digital literacy does not necessary lead to good digital literacy skills.” Unfortunately, he did not unpack this statement, and I forgot to ask about it during the question-and-answer portion of the session. Two other interesting findings were that digital literacy is more strongly associated “with verbal skills than math skills” and “with business writing than humanities writing.” The potential implications of the last two could be great.

Bill Wynne closed by reminding the audience that although digital literacy has been called several things, it is essentially the skillful use of information technology. As such, appropriate assessment tools are going to use performance-based tasks to prove that the student or employee has mastered the skills.

If you were able to get anything out of this recollection, then thank Bill Wynne.

2 thoughts on “ETS’s Approach to Measuring Digital Literacy”

  1. Lassana, thanks for sharing this. I am very interested in learning more about how ETS is assessing ICT literacy. I skimmed the article you linked to… do you know if their assessment is something other researchers can use? I have used a technology self-efficacy scale, but it relies on self-reporting, and students had very positive beliefs about their ability to use ICT… it would be interesting to compare those scores with performance-based assessment. 🙂

    And… is their data set by any chance available for other researchers to use? (I can imagine the answer to this is no, but I remain hopeful.)

    1. Hi Abby. I am delighted you found this post to be useful! Funny you should ask. I asked the ETS representative, and he seemed interested and ready to share the research instrument with other people. Would you like me to do a virtual introduction? The measurement tools used by Paul DiMaggio and Eszter Hargittai, and by Jan van Dijk and Alexander van Deursen, also immediately come to mind. You should be able to find their instruments within their articles. Hargittai ran an experiment doing EXACTLY that, comparing what students “thought” they could do with what they were “really” able to do.

      As for data, I’m sure you can get your hands on some. It’s making sense of the data that most people struggle with.
