Work this week was focused on writing the Conclusion/Future Perspective of the thesis. Here’s what I wrote:
While much more testing needs to be done to confidently identify which UI and design elements influence our risk-taking tendencies in digital environments, it's clear that time, more than anything else, affects users' ability to make predictable risk-taking decisions. The drastically different results between the timed and untimed visual interface tests indicate that time, more than any visual or sonic cue, shapes our decision-making process.
To a lesser extent, many of the tested users who were confronted with risky situations in the tested digital environments appeared to reject Prospect Theory when shown images of the potential consequences of their actions under a severe time limit. For many other people, being shown images will not affect their ultimate decision-making process. The wording, for most users, is still the most important factor in making decisions about risky situations. However, there does seem to be considerably more discrepancy when users are given a glimpse, however brief, of the future. Color, size, and human-ness had very little effect on users who had plenty of time to explore their options. Only the combination of consequence imagery and the 5-second time limit produced substantial divergence from Prospect Theory.
The purpose of this thesis was to find out whether there is a way to exploit people's natural risk-taking tendencies, and there does appear to be. However, considerably more study is needed to determine the exact circumstances, and the demographics most susceptible to being manipulated into risk-averse or risk-prone behavior in digital environments. Future studies might attempt to pinpoint exactly how much time, on average, it takes before users revert to making the Prospect Theory-predicted choice in an interface asking them to choose between a gamble and a sure thing. Further studies might also attempt to determine which images are most persuasive in given interfaces.
While showing images of the options' outcomes and imposing a time limit clearly have an effect on users' willingness to take risks, factors like age and human-ness seem to have little effect at all. More studies are needed to be certain, but the tested users showed no clear trends relating to age or to their interactions with the human-like sonic interface. This could be because the sonic interface was not realistic enough, or simply because, at the moment, most users probably do not interact regularly with sonic interfaces. If they do, it is much less likely that they are carrying out perceived "risky" activities (for example, agreeing to Terms & Conditions, or online banking) using sonic interfaces.
One of the more disturbing revelations of this thesis was how unpredictable people's risk-taking in digital environments becomes when a time limit is imposed. This discovery is of particular note because net neutrality is so hotly debated at the moment. Net neutrality is the idea that all internet service providers (ISPs) should be legally obligated to treat all content the same. Content should not be made more or less accessible based on the amount of money a customer pays for it, or on where that customer is located, for example.
If an ISP can speed up or slow down internet service while users are making risky decisions online, it has the potential to influence how we make choices. Imagine being confronted with a series of websites that allow you to click through choices rapidly. Users may suddenly find themselves taking on risks that they would have considered more carefully had internet speed not been so variable. What if the speed at which their digital environment loads or updates begins to slow down on a page showing a particular choice? What if that particular choice is changing internet service providers? The user may rethink their gut reaction. They may make a choice that's different from the one they originally made—they may second-guess themselves.
Besides the effect time has on our risk-taking proclivities, there is something to be said for the use of manipulative UI in digital environments (these UI elements, mentioned previously, are called "dark patterns"). Though this thesis is not about dark patterns, which are usually overt tricks involving complicated wording or confusing directions, a future perspective for this thesis is making users more aware of manipulative UI elements.
Thinking about motifs that indicate something is bad or evil or ominous got me thinking about a popular forum on the website Reddit.com called /r/EvilBuildings. The community details blurb for the forum reads:
If the building could be the home to a super villain or evil corporation, it belongs here or really just any creepy looking building or maybe just anything evil or ok just buildings no no lets just stick with villainous/evil/creepy looking buildings.
In the forum, redditors post pictures of ominously lit structures that generally feature the architectural motifs we know, from movies and television and perhaps real life, to indicate a building's treacherousness: unnecessarily sharp, jutting spires on ancient stone castles; enormous skyscrapers with just a few windows, red light leaking out of the panes; pyramids projecting odd symbols at their summits. We have a sense of what makes a building look "evil" because of the availability heuristic: we have seen representations of ominous buildings for a long time.
To date, we have no such record for Evil Interfaces. I determined that a useful tool for all users of digital environments would be a publication that takes "evil" interfaces to task, publishing information about digital environments that are purposely built to manipulate users into actions that genuinely go against their best interests. I created the website www.evilinterfaces.com to catalog interfaces that use manipulative UI to get people to interact with a digital environment in a way that goes against the user's best interests.
Most relevant at this moment in time, and I think for years to come, is that users have begun to realize just how serious a risk it is to share and store personal data on digital interfaces. Creating a vetting system for the UI users interact with when entering their data would require a complete overhaul of the internet and other digital environments as we know them, probably over a number of years. At the moment, users have only a company or organization's word that their data will or will not be sold, manipulated, or used to sell and manipulate. At the very least, there needs to be an archive discussing manipulative UI, so that users can be better informed about how interfaces are using them, not just how they use interfaces.
/r/EvilBuildings. https://www.reddit.com/r/evilbuildings/. Accessed 2 Apr. 2019.