Gamification, Sensors and Big Data


What is Gamification?

“Gamification is the use of game design elements in non-game contexts,” as defined by Deterding et al. (2011, pp. 9-15). Huotari and Hamari (2012, pp. 17-22) define gamification as “a process of enhancing a service with affordances for gameful experiences in order to support user’s overall value creation”. For example, every time you buy a Starbucks coffee and collect points toward a free coffee, you are being gamified.


Who are the Gamers?

Several findings were highlighted by Moss (2015) at the Rocks Digital Conference: 42% of Americans play video games at least 3 hours per day; the average gamer is 35 years old; nearly 44% of gamers are female; 22 billion dollars are spent on video games; and there are 960 million games on smartphones.

Bartle (2003) published Designing Virtual Worlds, in which he differentiated four types of game users (better known as the Richard Bartle player types), who are only tangentially related to the avatars they may choose when playing video games.
Bartle's player-type taxonomy is shown in the figure below:

[Figure: Bartle's player-type taxonomy]

In gamification it is critical to monitor the performance of your metrics closely to ensure your business gets the best results possible and that the different player types stay happy.

What can Gamification do?

Gamification uses game mechanics to encourage return visits, deeper engagement, increased sales, and greater learning and participation.

This means the business is trying to get people motivated and engaged to take action in order to achieve a goal or a bigger purpose. Gamification helps the business by defining a framework within which to win or lose. Gamification recognizes that games are actually very powerful and sophisticated layers of mechanics that motivate people to achieve. These game mechanics can be applied to anything where human behavior needs to be optimized.


Different types of Gamification Mechanics

There are over 52 different types of game mechanics. However, the seven most commonly used mechanics in gamification are shown in the figure below:

[Figure: the seven most commonly used gamification mechanics]

Sensors

Sensors, including accelerometers, location detection, wireless connectivity and cameras, offer another big step towards closing the feedback loop in personalized data. Many devices and services help with tracking physical activity, caloric intake, sleep quality, posture, and other factors involved in personal well-being.

In 2006, Nike developed an app called Nike+ which tracks running distance, speed, and time. It stores the data every time you run so you can monitor your progress. You can compete with friends and other Nike+ users to try and run the furthest distance. The purpose is to get people active and running, and they use gamification to do so. There are leaderboards for different time frames (weekly, monthly, and so on). If you are one of the top runners, your profile is displayed on the leaderboard for bragging rights. Nike+ can export the data to other apps (such as food tracking apps for calories burned) and can also publish your information to your social media accounts. Nike+ has become a community of people who all enjoy running and want to push each other to keep improving and live healthy lifestyles.

In order to gain a competitive advantage, Nike added gamification to its marketing mix and was able to secure a controlling portion of the running shoe market. In 2006, Nike controlled 47% of the running shoe market and launched Nike+ in the middle of the year. In 2007, it controlled 57% of the market, and 61% in 2009. Nike has held a controlling portion of the running shoe market ever since. In 2007 there were 500,000 Nike+ members. By 2013, there were 11,000,000 Nike+ members, and the numbers keep growing.



Big Data

The success of Nike+ is an example of how measuring performance is useful for gaining key insight. Users get to understand their running patterns better, while for Nike, servers' worth of Nike+ data can be turned into strategic business decisions that improve company performance.

Nike's vast amount of data on the physical performance of runners has turned into an initiative to see what more can be done with the data to make inferences about social and mental behavior as well. The Nike+ Accelerator program gave software developers access to the data so that new startups could be created around it.

Nike also used the data to perform a life-cycle analysis of shoe materials, because little was known about their sustainability given that Nike's supply chain is quite long. By analyzing data from 7 million users, most of them long-term users, Nike was able to help 600 designers make cost-effective, quality-enhancing, and sustainable changes to product materials.

All thanks to gamification. Stefan Olander, VP of Nike's Digital Sport division, said, “People want credit for their physical activity.” It's true. Not everyone can be a professional, but people are passionate about being active. Nike+ game mechanics turn everyone into smarter self-coaches through motivation and social inspiration. Nike turned one of the most uncomplicated sports in the world, running, into a data-driven social sport that gives users access to tons of data about their personal achievements. Runners can use this data to become better at running, resulting in a healthier lifestyle. In addition, Nike gives software developers open access to this data.

References

Deterding, S., Dixon, D., Khaled, R., and Nacke, L. (2011) From game design elements to gamefulness: defining gamification. Proc. Int. Academic MindTrek Conference pp. 9-15.

Huotari, K., and Hamari, J. (2012) Defining gamification: a service marketing perspective. Proceeding of the 16th International Academic MindTrek Conference, pp. 17-22.

Moss, M. (2015) ‘Digital Marketing with Website Gamification’. Available at:
http://rocksdigital.com/digital-marketing-with-website-gamification-attract-engage-and-build-an-audience/
(Accessed: 06 September 2015).

Bartle, R.A. (2003) Designing Virtual Worlds. 1st edn. New Riders.

http://www.nikefuellab.com

Data Analytics for Large Personal Lifelogs

Introduction

As wearable technology has become significantly cheaper, people increasingly rely on such devices to record profiles of individual behaviour by capturing and monitoring personal activities. This activity is often referred to as lifelogging. Considering the heterogeneous nature of the data created, as well as its appearance in the form of constant data streams, lifelogging shares features that are usually attributed to big data. A typical example of a wearable camera is Microsoft's SenseCam, which can capture vast personal archives every day. A significant challenge is to organise and analyse such large volumes of lifelogging data, turning the raw data collected by different sensors into meaningful information for users.


To date, various aspects of lifelogging have been studied, such as the development of sensors, efficient capture and storage of data, processing and annotating the data to identify events, improved search and retrieval of information, assessment of user experience, and the design of user interfaces for applications such as memory aids, diet monitoring, or the analysis of activities of daily living (ADL).

Given the relative success of these efforts, the research challenge has now shifted from broader aspects of data management to that of retrieving information from the vast quantities of captured data. Current applications address this by employing automatic classifiers for segmenting a whole day's recording into events and searching the historical record, or by building ontology-based multi-concept classifiers and searching for specific events. More recent research suggests using statistical mappings from low-level visual features to semantic concepts in personal lifelogs. It is important to note that these approaches are based on training classifiers on a set of annotated ground-truth images. Although supervised methods can lead to more accurate outcomes in terms of detecting known patterns, they require prior knowledge from a domain expert to be fed into the system. In addition, the results of the classifier depend heavily on the quality and quantity of the training data, i.e. they are biased towards detecting activities that are defined and known to the domain expert a priori. Given that visual lifelogs usually consist of large and often unstructured collections of multimedia information, such 'concept-based' and 'rule-based' methods for analysing lifelogging data are not suitable for all use-cases. Ideally, an algorithm should be able to detect unknown phenomena occurring at different frequencies in such data.
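As a concrete illustration of the supervised, concept-based approach and its reliance on annotated ground truth, the sketch below trains a linear classifier on placeholder visual features using scikit-learn; the feature vectors, labels and concept vocabulary are stand-ins, not an actual lifelog dataset.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# Placeholder low-level visual features (e.g. colour/edge histograms) and
# expert-annotated concept labels such as "eating" vs "walking".
X = np.random.rand(500, 64)            # 500 images, 64-dimensional features
y = np.random.randint(0, 2, size=500)  # ground-truth labels from a domain expert

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearSVC().fit(X_train, y_train)  # learns only the concepts it was shown
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
# Anything outside the annotated concept vocabulary is invisible to this model,
# which is exactly the limitation described above.
```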

Data

In this study, the data were generated from one person wearing the SenseCam over a six-day period, from a Saturday to a Thursday. These particular days were chosen in order to include a weekend, where normal home activity varied in comparison to events on weekdays or a working week. Data statistics are reported in Table 4.1 below. To create a ground truth (in machine learning, the term ground truth refers to the accuracy of the training set's classification for supervised learning techniques; it is used in statistical models to support or reject research hypotheses), the user reviewed her collection and manually marked the boundary image between all events.

[Table 4.1: statistics of the six-day SenseCam data collection]

Methods and Results

Detrended Fluctuation Analysis (DFA) was initially used to analyse the image time series recorded by the SenseCam, and exposed strong long-range correlation in these collections. This implies that continuous low levels of background information are picked up all the time by the device. Consequently, DFA provides a useful background summary.

[Figure 4.1: plot of log F(n) vs log n for different box sizes]

In the plot of log F(n) vs log n for different box sizes (Figure 4.1), the exponent H = 0.93203 is clearly greater than 0.5 and reflects strong long-range correlation in the images from the SenseCam, i.e. it indicates that the time series is not a random walk (a random walk is a mathematical formalisation of a path that consists of a succession of random steps) but is cyclical, implying that continuous low levels of background information are picked up constantly by the device. Consequently, DFA provides a measure of the many similar 'typical' backgrounds or environments.
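For readers who want to reproduce this kind of analysis, the following is a minimal sketch of DFA in Python with NumPy; the random series stands in for the SenseCam image time series, and the box sizes are illustrative choices rather than the exact settings used in the study.

```python
import numpy as np

def dfa(signal, box_sizes):
    """Detrended Fluctuation Analysis of a 1-D time series.

    Returns the fluctuation F(n) for each box size n; the slope of
    log F(n) vs log n estimates the scaling exponent H.
    """
    x = np.asarray(signal, dtype=float)
    # Step 1: integrate the mean-subtracted series (the "profile").
    profile = np.cumsum(x - x.mean())
    fluctuations = []
    for n in box_sizes:
        # Step 2: split the profile into non-overlapping boxes of length n.
        n_boxes = len(profile) // n
        segments = profile[:n_boxes * n].reshape(n_boxes, n)
        # Step 3: detrend each box with a least-squares linear fit.
        t = np.arange(n)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        # Step 4: average the root-mean-square fluctuation over all boxes.
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

# Example: estimate H from the slope of log F(n) vs log n.
box_sizes = np.unique(np.logspace(2, 3.5, 20).astype(int))
series = np.random.randn(10_000)          # stand-in for the image time series
F = dfa(series, box_sizes)
H, _ = np.polyfit(np.log(box_sizes), np.log(F), 1)
print(f"Estimated scaling exponent H = {H:.3f}")  # ~0.5 for uncorrelated noise
```

An exponent near 0.5 indicates an uncorrelated series, while values approaching 1, such as the H = 0.93203 reported above, indicate strong long-range correlation.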

The dynamics of the largest eigenvalue were examined using the Wavelet Transform method. The technique gives a clear picture of the movements in the image time series by reconstructing them using each wavelet component. Some peaks were visible across all scales, as shown in Figure 4.2. Studying the largest eigenvalue across all wavelet scales provides a powerful tool for examining the nature of the captured SenseCam data.

[Figure 4.2: peaks in the largest eigenvalue across all wavelet scales]
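A minimal sketch of this scale-by-scale reconstruction idea is shown below, assuming the PyWavelets library and a Daubechies wavelet; the random series is a stand-in for the largest-eigenvalue time series, and the wavelet family and decomposition level are illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_components(signal, wavelet="db4", level=5):
    """Decompose a series and reconstruct its contribution at each scale."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        # Zero out every band except the i-th, then invert the transform,
        # giving the part of the series carried by that wavelet scale.
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[: len(signal)])
    return components  # one reconstructed series per wavelet scale

series = np.random.randn(4096)  # stand-in for the largest-eigenvalue series
for scale, comp in enumerate(wavelet_components(series)):
    print(f"scale {scale}: peak magnitude {np.abs(comp).max():.2f}")
```

Peaks that appear in the reconstructions at every scale correspond to the events visible across all scales in Figure 4.2.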

Large Personal Lifelogs

What is the Quantified Self?

Our body is constantly sending out signals that, if listened to carefully, allow us to better understand the state of our personal well-being. For example, feeling weak and tired after a long sleep can be seen as an indication that the quality of the sleep was low. It might reveal evidence regarding our personal fitness level or our mental state. Being aware of the importance of our body signals, medical doctors rely heavily on them when they have to make a diagnosis. Adopting these methods, a growing number of people have now started to constantly measure the fitness level of their bodies, using a variety of equipment to collect and store the data. These individuals can be considered part of the Quantified Self (QS) movement, which uses instruments to record numerical data on all aspects of our lives: inputs (food consumed, surrounding air quality), states (mood, arousal, blood oxygen levels), and performance (mental, physical). The data are acquired through technology: wearable sensors, mobile apps, software interfaces, and online communities.

The technology research and advisory company Gartner predicts that by 2017, the Quantified Self movement will evolve into a new trend with around 80% of consumers collecting or tracking personal data. Moreover, Gartner predicts that by 2020, the analysis of consumer data collected from wearable devices will be the foundation for up to five percent of sales from the Global 1000 companies. Given these predictions, it comes as no surprise that more and more companies are trying to enter the market with novel wearable devices. In fact, a multitude of devices, services and apps are now available that track almost everything we do.


Lifelogging

Memory is the process by which information is encoded, stored, and retrieved. Given enough stimuli and rehearsal, humans can remember information for many years and recall that information whenever required. However, not all stimuli are strong enough to generate a memory that can be recalled easily. One of the most valuable and effective ways to reduce the impact of Age-related Memory Impairment on everyday functioning is the use of external memory aids. Lifelogging technologies can significantly contribute to the realisation of these external memory aids. Lifelogging is the process of automatically, passively and digitally recording aspects of our life experiences. This includes visual lifelogging, where lifeloggers wear head-mounted or chest-mounted cameras that capture personal activities through the medium of images or video. Despite its relative novelty, visual lifelogging is gaining popularity because of projects like the Microsoft SenseCam.

The SenseCam is a small, lightweight wearable device that automatically captures a wearer's every moment as a series of images and sensor readings. It has been shown recently that such images and other data can be periodically reviewed to recall and strengthen an individual's memory. Normally, the SenseCam captures an image every 30 seconds and collects about 4,000 images in a typical day. It can also be set up to take more frequent images by detecting sudden changes in the wearer's environment, such as significant changes in light level, motion and ambient temperature. The SenseCam therefore generates a very large amount of data for a single typical day.
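As a rough back-of-the-envelope illustration of that data volume, the snippet below estimates a day's archive from the figures quoted above; the per-image size and sensor-logging rate are assumptions for illustration, not published SenseCam specifications.

```python
# Rough estimate of one day's SenseCam archive (assumed figures, not device specs).
IMAGES_PER_DAY = 4_000                   # typical figure quoted above
AVG_IMAGE_KB = 40                        # assumption: small low-resolution JPEG
SENSOR_READINGS_PER_DAY = 24 * 60 * 60   # assumption: one multi-channel reading per second
BYTES_PER_READING = 16                   # assumption: timestamp plus a few sensor channels

image_mb = IMAGES_PER_DAY * AVG_IMAGE_KB / 1024
sensor_mb = SENSOR_READINGS_PER_DAY * BYTES_PER_READING / (1024 ** 2)
print(f"~{image_mb:.0f} MB of images and ~{sensor_mb:.1f} MB of sensor logs per day")
print(f"~{(image_mb + sensor_mb) * 365 / 1024:.1f} GB per year for a single lifelogger")
```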


BIG Lifelogging data

“Big Data” applications are generally believed to have four elements which popularly characterise a big data scenario: volume, variety, velocity and veracity. In this section we will examine how lifelogging does, or does not, conform to those four characteristics, because there are certain advantages which “Big Data” technologies could bring to the lifelogging application.

[Figure: the four characteristics of big data - volume, variety, velocity and veracity]

Lifelogging is essentially about generating and capturing data, whether it comes from sensors, our information accesses, our communications, and so on. One characteristic which makes lifelogging a big data application, and which poses both challenges and opportunities for data analytics, is the variety in the data sources.

Primary data includes sources such as physiological data from wearable sensors (heart rate, respiration rate, galvanic skin response, etc.), movement data from wearable accelerometers, location data, nearby Bluetooth devices, WiFi networks and signal strengths, temperature sensors, communication activities, data activities, environmental context, and images or video from wearable cameras, and that does not take into account the secondary data that can be derived from this primary lifelog data through semantic analysis. All these data sources are tremendously varied and different. In lifelogging, all these varied sources merge and combine to form a holistic personal lifelog in which the variety across data sources is normalised and eliminated.
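As a sketch of what normalising that variety into a single lifelog could look like, the snippet below merges readings from several hypothetical sensor streams into one timestamped timeline; the field names, sensor types and values are illustrative only.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any

@dataclass
class LifelogEvent:
    timestamp: datetime
    source: str   # e.g. "accelerometer", "gps", "camera", "bluetooth"
    payload: Any  # the reading itself, in whatever shape the sensor produces

def merge_streams(*streams):
    """Flatten several per-sensor streams into one chronologically ordered lifelog."""
    return sorted((e for s in streams for e in s), key=lambda e: e.timestamp)

accel = [LifelogEvent(datetime(2015, 9, 17, 9, 0, 1), "accelerometer", (0.1, 0.0, 9.8))]
gps   = [LifelogEvent(datetime(2015, 9, 17, 9, 0, 0), "gps", (53.38, -6.25))]
photo = [LifelogEvent(datetime(2015, 9, 17, 9, 0, 30), "camera", "img_000123.jpg")]

for event in merge_streams(accel, gps, photo):
    print(event.timestamp, event.source)
```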

The velocity of data refers to the subtle, shifting changes in patterns within a data source or stream. This is not much of an issue for lifelog data yet, because most lifelogging analysis and processing is not done in applications which require identifying a pattern or a change in real time. This is one of the trends for future work; real-time pattern analysis could potentially be employed for healthcare monitoring and real-time interventions.

Lifelogging generates continuous streams of data on a per-person basis; however, despite the potential for real-time interactions, most of the lifelogging applications we have seen to date do not yet operate in real time. So while lifelogging does not yet have huge volume, the volume of data is constantly increasing as more and more people lifelog. For a single individual, the data volumes can be large when considered as a personal information management challenge, but in terms of big-data analysis, the data volumes for a single individual are small. Considering the lifelogs of many people, thousands or perhaps millions, all centrally stored by a service provider, the data analytics over such huge archives becomes a real big-data challenge in terms of volume of data.

Finally, veracity refers to the accuracy of data and to it sometimes being imprecise and uncertain. In the case of lifelogging, much of our lifelog data is derived from sensors which may be troublesome, or have issues of calibration and sensor drift, as described in Byrne and Diamond (2006). Hence, we can see that lifelogging does have issues of data veracity which must be addressed. Semantically, such data may not be valuable without additional processing. In applications of wireless sensor networks in environmental monitoring, for example, trust and reputation frameworks have been developed to handle issues of data accuracy, such as RFSN (Reputation-based Framework for High Integrity Sensor Networks) developed by Ganeriwal et al. (2008). Similarly, data quality is a major issue in enterprise information processing.
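To illustrate the general idea behind such trust and reputation frameworks, here is a minimal sketch of a beta-reputation style weighting for a single sensor; it is an illustration of the concept only, not the actual RFSN algorithm of Ganeriwal et al. (2008).

```python
# Minimal sketch of a beta-reputation style trust weight for sensor readings.
# This illustrates the general idea only; it is not the RFSN algorithm itself.
from dataclasses import dataclass

@dataclass
class SensorReputation:
    alpha: float = 1.0  # prior count of "consistent" observations
    beta: float = 1.0   # prior count of "inconsistent" observations

    def update(self, consistent: bool) -> None:
        # Each new reading is judged against its neighbours or an expected range.
        if consistent:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        # Expected value of a Beta(alpha, beta) distribution.
        return self.alpha / (self.alpha + self.beta)

rep = SensorReputation()
for reading_ok in [True, True, False, True, True, True]:  # toy consistency checks
    rep.update(reading_ok)
print(f"trust weight for this sensor: {rep.trust:.2f}")
```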

References
Byrne, R. and Diamond, D. (2006). Chemo/bio-sensor networks. Nature Materials, 5(6):421-424.

Ganeriwal, S., Balzano, L. K., and Srivastava, M. B. (2008). Reputation based framework for high integrity sensor networks. ACM Transactions on Sensor Networks, 4(3):1-37.

http://quantifiedself.com

http://research.microsoft.com/en-us/um/cambridge/projects/sensecam/