Environmental Attitudes Inventory
Outcome
Attitudes
Audience
Adults
Method
Survey
Citation
Milfont, T., & Duckitt, J. (2010). The Environmental Attitudes Inventory: A valid and reliable measure to assess the structure of environmental attitudes. Journal of Environmental Psychology, 30, 80–94. https://doi.org/10.1016/j.jenvp.2009.09.001
Background
The set of items was developed from a review of research on environmental attitudes. The full inventory (all 120 items) was originally used with students at the University of Auckland who were enrolled in an introductory psychology course. Different versions of the inventory have also been tested in Brazil, Australia, Europe, and North America.
Format
There are twelve scales of 10 items each, and each scale measures one factor of environmental attitudes. Participants respond to each item on a 7-point scale, where 1 is “strongly disagree” and 7 is “strongly agree.” There is also a condensed 24-item version that uses the same 7-point scale.
Audience
Adults
When and how to use the tool
As attitudes are not likely to change easily, this tool is best used to understand the audience or to compare subgroups within a community. If used for program evaluation, the program should be impactful and, ideally, take place over an extended period. Think carefully about whether a program of your length can realistically change environmental attitudes. It may be helpful to skim the scales and their items and select the scale (one of 12 scales, with 10 items assessing one factor) that best matches your program objectives. Alternatively, scanning the 120 items may give you ideas for developing your own items to assess attitudes. You won’t be able to say your tool has been tested and deemed appropriate, but it may serve your needs.
How to analyze
We recommend entering survey responses into a spreadsheet using a program such as Microsoft Excel. Create a spreadsheet with 10 columns for the 10 statements in each scale and a row for each individual. If you include more than one scale in your survey, add the appropriate number of columns based on the number of questions you used. Assign each survey a unique ID number, and enter each individual’s responses across the corresponding row, recording the value from 1 (“strongly disagree”) to 7 (“strongly agree”). Enter a dot if the response was skipped.
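If you prefer to script these steps rather than work in a spreadsheet, here is a minimal sketch in Python using the pandas library. It assumes the responses have been saved as a CSV file named responses.csv with an ID column (id) and one column per statement (q1 through q10); the file and column names are illustrative, not part of the original tool.

```python
import pandas as pd

# Read the entered responses; treat the dot used for skipped
# responses as missing data rather than as text.
responses = pd.read_csv("responses.csv", na_values=".")
print(responses.head())
```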
Before you calculate an average score for each individual across the scale, you will need to determine whether any of the questions need to be reverse coded. In the original tool, questions that need to be reverse coded are marked with (R). The authors have phrased certain questions in a negative way. For example, the Enjoyment of Nature scale includes the statement: “I find it very boring being out in wilderness areas.” Here, a score of 1 (strongly disagree) would imply that the respondent enjoys being in nature. To align the data from this statement with the other questions in the scale, the responses need to be reverse coded (a response of 1 becomes a 7, a 2 becomes a 6, and so on).
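Continuing the sketch above, reverse coding on a 7-point scale amounts to subtracting each response from 8. Which columns to reverse depends on the scale you chose; the column names below are hypothetical placeholders for the items marked (R).

```python
# Reverse-code the negatively worded items: on a 1-7 scale,
# a response x becomes 8 - x (1 -> 7, 2 -> 6, and so on).
reverse_coded_items = ["q2", "q5", "q9"]  # hypothetical (R) items
responses[reverse_coded_items] = 8 - responses[reverse_coded_items]
```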
Once all of the appropriate questions have been reverse coded, create an average score for each individual by adding their responses and dividing by the number of questions they answered. Skipped questions (the dots) should be left out of both the sum and the count. The average will fall between 1 and 7.
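In the same sketch, pandas computes this per-person average directly, skipping missing values (the dots) by default so that only answered questions are counted.

```python
# Average each individual's responses across the scale's items;
# missing values are excluded from both the sum and the count.
item_columns = [f"q{i}" for i in range(1, 11)]  # q1 ... q10
responses["scale_mean"] = responses[item_columns].mean(axis=1)
print(responses[["id", "scale_mean"]])
```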
What to do next
Once you’ve administered your survey and analyzed the data, consider the following suggestions about what to do next:
- What do the data tell you? Do people score higher on some subscales than others? Is one much lower than another? If these are attitudes you want to support and reinforce, this may suggest where your program could focus attention to strengthen the attitudes measured by the lower-scoring scale.
- You could compare populations to determine whether your program participants have a different outlook on the environment than the general population, or whether one geographic area of your community is different from another (see the sketch after this list). This could also provide justification for program development, marketing, or funding proposals.
- Invite program staff or other partners to look over the data. Together you might also consider:
- What do these results tell us about our programming? Why do we think we got these results?
- What did we think we would see with respect to attitudes? And did these data support our goals?
- If our results did not support our goals, can we brainstorm on areas within the programming or delivery to influence attitudes? What changes should be made to programming, or how should new programs be designed?
- Who in our community should we reach out to for a collaborative discussion of program design?
- Who or what organizations can we share our learning with?
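As a follow-up to the comparison idea above, here is a small extension of the earlier Python sketch. It assumes a group column (e.g., participants versus a comparison sample, or one geographic area versus another) was recorded alongside the responses; that column is an assumption for illustration.

```python
# Compare average scale scores across subgroups: the mean score
# and the number of respondents in each group.
print(responses.groupby("group")["scale_mean"].agg(["mean", "count"]))
```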
How to see if this tool would work with your program
Select the scale that is appropriate to your program, and then review each statement to determine whether it is relevant. Discuss with staff to decide if the results will be useful to you, and pilot test the scale with people who represent your audience. To pilot test, ask a small group of willing participants from your target audience to talk to you as they complete the tool. What are they thinking when they read each item? What experiences come to mind when they respond? If their answers match what you expect and your evaluation will gain relevant information, you are on the right track! If the answers are different for each person, and they should be more similar given their experiences, you may need to look at other tools.
Tool Tips
- Consider carefully whether these scales will match the environmental outcome your program is influencing, and whether they will be appropriate for your community.