
Ellen Helsper
LSE

Luc Schneider
LSE
Until recently, very few tools were available to policy makers, practitioners, and researchers to assess the broad spectrum of digital skills amongst young people. Researchers working on the ySKILLS project have filled this gap by developing a unique 31-item survey instrument, the youth Digital Skills Index (yDSI). The yDSI combines skills and knowledge items across four distinct domains, covering both functional and critical aspects of digital skills (see Figure 1).

Figure 1: The four yDSI digital skills domains incorporating functional and critical aspects
Four digital skills domains emerged from a review of the existing literature: technical and operational (TO); information navigation and processing (INP); communication and interaction (CI); and content creation and production (CCP) skills.
The instrument is one of a kind, not only in its theoretical grounding but also in its extensive cross-cultural validation through 80 cognitive interviews, 2,438 pilot surveys, and 143 performance tests with young people in nine European countries.
Gaps and issues in digital skills instrument development
A deep dive into the literature followed by cognitive interviews with young people revealed important gaps. Measurement of CI and CCP skills was particularly lacking. Many existing items did not tackle critical CI skills. For CCP skills, existing research and measurement focussed mostly on the creation of digital content, ignoring skills related to the distribution of everyday content as well as knowledge of the regulation of online content. There was much more consistency in the design of TO and INP items across different studies and different countries. The yDSI thus incorporates significant improvements to existing items as well as newly designed ones.
Performance tests, designed to validate the skills items in the yDSI, brought to light further complications for doing cross-national survey research. They revealed large disparities in the information available in the languages of the different countries included in the study, even for topics as seemingly straightforward as dinosaurs. This complicates the design of INP skills tasks that use search engines. We found considerable difficulties, and age differences, in young people’s ability to complete relatively basic tasks, such as restricting a news search to a particular year or detecting whether something was advertising or fake news.
Designing the yDSI
Based on best practice guidelines established for the ySKILLS project, an initial list of 136 quality skills items was compiled. This list was whittled down to the best 42 items for validation. After cognitive interviews, pilot surveys, and performance tests, it was reduced further to 25 items (see Figure 2). One item related to programming skills was assigned its own domain: while programming did not statistically fit within the four domains, it is of clear importance to digital skills policy and training and is thus measured separately.

Figure 2: yDSI digital skills items measuring different domains of digital skills
On top of this, a set of 14 items was created to test participants’ critical knowledge of digital tools and media. These items were created because it became apparent that participants could not be asked to assess certain aspects of their skills without the question itself supplying the knowledge it was enquiring about. The yDSI digital knowledge items are formulated as true or false statements and capture varying levels of difficulty. This novel tool, which can be used in survey research, is able to differentiate between more and less knowledgeable participants. Analysis of these knowledge items showed that around 45% of young people got half or more of the questions wrong, with little distinction between the four domains (see Figure 3).

Figure 3: Distribution of correct answers on digital knowledge indicators
The final short version of this instrument consists of the six items that were best at distinguishing different skill levels.
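For readers who want a concrete sense of how such true/false knowledge items can be scored and a short form selected, here is a minimal Python sketch. The item names, the answer key, and the use of a corrected item-total correlation to rank items are all assumptions made for illustration; the report documents the actual items and selection procedure.

```python
import pandas as pd

# Hypothetical answer key for 14 true/false knowledge items:
# 1 = the statement is true, 0 = it is false. Item names and keys
# are invented for this sketch, not the actual yDSI content.
KEY = pd.Series({f"know_{i:02d}": k for i, k in enumerate(
    [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1], start=1)})

def score_knowledge(responses: pd.DataFrame) -> pd.DataFrame:
    """Turn raw true/false answers (coded like KEY) into per-item
    correctness: 1 = correct, 0 = wrong."""
    return (responses[KEY.index] == KEY).astype(int)

def share_half_or_more_wrong(correct: pd.DataFrame) -> float:
    """Share of respondents who got half or more items wrong,
    mirroring the roughly 45% reported for the pilot surveys."""
    return (correct.sum(axis=1) <= len(KEY) / 2).mean()

def discrimination(correct: pd.DataFrame) -> pd.Series:
    """Corrected item-total (point-biserial) correlation: how sharply
    each item separates more from less knowledgeable respondents."""
    rest = correct.sum(axis=1).to_numpy()[:, None] - correct.to_numpy()
    rest = pd.DataFrame(rest, index=correct.index, columns=correct.columns)
    return pd.Series({c: correct[c].corr(rest[c]) for c in correct.columns})

# Selecting a six-item short form from a response DataFrame `df`
# (respondents x items) would then look like:
# short_form = discrimination(score_knowledge(df)).nlargest(6).index
```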
“Sins” and “virtues” in digital skill surveys
Besides developing a new and unique measurement instrument, the report published by the ySKILLS project highlights seven “sins” that should be avoided and proposes seven “best practice” guidelines for the design of digital skills survey items. Most existing studies commit at least one of the following “sins”, making many of their digital skills instruments less adequate: (1) generally poor survey item design; (2) solely PC-based content; (3) overly vague or general phrasing; and measuring (4) outcomes, (5) use, (6) attitudes, or (7) confidence instead of skills.
To tackle these issues, the following seven best practice guidelines should guide future item and scale creation by researchers and others with a stake in assessing digital skills amongst the general population. Surveys should ask whether participants possess a certain digital skill, rather than about (1) their use of technology or (2) how expert they consider themselves to be. Surveys should (3) avoid device-, app- or activity-specific items; (4) be designed to capture (functional) digital skills and (critical) digital knowledge; (5) include knowledge statements of which at least half are untrue; (6) be phrased in such a way that participants avoid evaluating their skills in comparison to others; and (7) be scale-based, including an option suggesting that a lack of skill or understanding is acceptable.
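Purely as an illustration, the sketch below shows how an item bank might encode several of these guidelines so they can be checked automatically. Every item text, answer key, and scale label in it is a placeholder invented for this example rather than published yDSI wording.

```python
# Guideline (7): scale-based responses, with an explicit option making
# a lack of skill or understanding acceptable.
RESPONSE_SCALE = [
    "Not at all true of me",
    "Not very true of me",
    "Neither true nor untrue of me",
    "Mostly true of me",
    "Very true of me",
    "I do not understand what you mean by this",  # non-judgemental opt-out
]

# Guideline (4): functional skill items and critical knowledge items
# are kept as separate types; guideline (6): skill items describe the
# respondent alone, never a comparison with other people.
SKILL_ITEM = {
    "domain": "INP",
    "type": "skill",
    "text": "I know how to check whether the information I find online is true.",
}

KNOWLEDGE_ITEMS = [
    {"domain": "INP", "type": "knowledge", "key": False,
     "text": "The first result of an online search is always the most reliable one."},
    {"domain": "CCP", "type": "knowledge", "key": True,
     "text": "Content I post online can be covered by copyright rules."},
]

def keys_sufficiently_false(items: list[dict]) -> bool:
    """Guideline (5): at least half of the knowledge statements
    should be untrue."""
    keys = [it["key"] for it in items if it["type"] == "knowledge"]
    return keys.count(False) >= len(keys) / 2

assert keys_sufficiently_false(KNOWLEDGE_ITEMS)
```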
Note on methodology
The instrument was designed by researchers at the London School of Economics and Political Science and the University of Twente. Validation took place in Belgium, Estonia, Finland, Germany, Italy, the Netherlands, Poland, Portugal, and the UK. Cognitive interviews were conducted with 80 children aged 12 to 17. The survey instrument was tested with a balanced sample of at least 300 young people aged 18 to 25 per country (2,438 in total), and performance tests were conducted in four countries with 143 children aged 12 to 17.
The cognitive interviews were used to establish content validity; the pilot surveys used factor analysis and equivalence testing to establish construct, convergent, and discriminant validity; and the performance tests were designed to test criterion validity and were used to adjust the yDSI items.
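As a pointer to how the construct validity step can be approached, here is a minimal Python sketch using the factor_analyzer package, assuming the 25 skill items sit in a pandas DataFrame. It is an illustrative stand-in rather than the project’s actual analysis pipeline, which also involved cross-country equivalence testing.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def four_factor_check(skills: pd.DataFrame) -> pd.DataFrame:
    """Fit a four-factor exploratory model with an oblique rotation,
    since the four skill domains are expected to correlate."""
    fa = FactorAnalyzer(n_factors=4, rotation="oblimin", method="minres")
    fa.fit(skills.dropna())
    # Factors come out unlabelled; they are matched to the TO, INP,
    # CI, and CCP domains only after inspecting which items load where.
    loadings = pd.DataFrame(fa.loadings_, index=skills.columns,
                            columns=["F1", "F2", "F3", "F4"])
    # Each item should load strongly on one factor (its intended
    # domain) and weakly on the other three.
    return loadings.round(2)
```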
The instrument will be used in the ySKILLS longitudinal cross-country panel survey to better understand the relationship between digital skills and well-being among European teenagers.
More information
The final report provides a detailed overview of the final short version of the instrument, as well as an extended version that includes more items for practitioners interested in measuring digital skills with even greater precision. The data analysis and item selection process are discussed at length to enable educators, policymakers, and researchers alike to apply the instrument in different contexts as they see fit.