Weekly blog post
1. How do we define which information is relevant to a topic and which we can ignore?
2. How do we decide how to divide up this information?
3. What are the implications of these decisions?
We learnt about classification and metadata during this week’s lecture and readings, and I would like to use the Internet of Things (IoT) to illustrate how the revolution in technology is going to generate information on a scale we may not even be prepared for.
A link with some fun build-it-yourself IoT projects – https://www.losant.com/blog/7-cool-iot-projects-worth-checking-out
The article opens by explaining the rise of IoT projects: not just advancing technology and creative ideas, but also “thanks to the plunging cost of compute and growing availability of good microprocessor modules, there’s an abundance of fun and life-enhancing connected device projects..” (Henderson, 2016)
With IoT, we all see the potential of making use of everyday items to solve problems: developers can unleash their creativity by embedding code in objects, instead of limiting themselves to a computer screen. However, to ensure consistency, developers have to deploy a standardised digital infrastructure that can be shared across the various objects. But before we are ready for such progress, are we prepared for the incoming flood of data that these connected objects will provide?
Right now we have Google Analytics on websites: we track users’ bounce rate, dwell time, return frequency and so on. But what if we receive different (and possibly more detailed) information from different objects?
Metadata, in my understanding, is data that gives information about other data. It is important, but with IoT we should consider how to better manage the current sets of data and their metadata, not to mention how stores of such data can become a target for hacking.
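To make the data/metadata distinction concrete, here is a minimal sketch using a hypothetical IoT temperature sensor reading. The field names and device ID are illustrative, not any real device’s schema:

```python
# A hypothetical sensor reading: the measured value is the data,
# and the surrounding context (who measured it, when, where) is
# the metadata describing that data.
reading = {
    "value_celsius": 24.7,      # the data itself
    "metadata": {               # data about the data
        "device_id": "kitchen-sensor-01",
        "recorded_at": "2016-10-03T08:15:00Z",
        "location": "kitchen",
        "firmware": "1.2.0",
    },
}

def describe(reading):
    """Summarise a reading using only its metadata."""
    meta = reading["metadata"]
    return f"{meta['device_id']} @ {meta['recorded_at']} ({meta['location']})"

print(describe(reading))
# kitchen-sensor-01 @ 2016-10-03T08:15:00Z (kitchen)
```

Notice that even without the temperature value, the metadata alone reveals a lot (a device in someone’s kitchen, reporting at a specific time), which is exactly why stores of metadata are attractive to attackers.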
Inspired by this article – www.lifehacker.com.au/2015/02/why-the-internet-of-things-is-a-problem-for-metadata-retention/
Going back to the three questions posed during the online lecture this week, I see the importance of classification.
Using the search for a good dining place as an example, we (my peers, at least) constantly turn to Instagram for ideas, searching with the hashtags “jiaklocal”, “burppleeats”, or even “sgfood”. When did the phenomenon of searching via hashtags start? Since when has the decision to visit a restaurant depended so heavily on the aesthetics of the photographs taken? This also assumes the picture turns up first (for users processing information peripherally, who want to make quick decisions based on what appears first), and probably numerous times (an indication of high customer frequency).
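Hashtag search is, in effect, classification: each hashtag acts as a category label, and a search is a lookup of every photo filed under that label. A minimal sketch of this idea is an inverted index mapping hashtags to photos; the photo IDs and tags below are made up for illustration:

```python
# Build an inverted index: hashtag -> list of photos carrying it.
from collections import defaultdict

photos = [
    {"id": "photo_1", "hashtags": ["sgfood", "jiaklocal"]},
    {"id": "photo_2", "hashtags": ["burppleeats", "sgfood"]},
    {"id": "photo_3", "hashtags": ["travel"]},
]

index = defaultdict(list)
for photo in photos:
    for tag in photo["hashtags"]:
        index[tag].append(photo["id"])

# Searching "sgfood" surfaces every photo classified under that tag,
# in the order they were indexed.
print(index["sgfood"])  # ['photo_1', 'photo_2']
```

The limits of this scheme are exactly the ones discussed above: the index only knows which label a photo carries, not whether the photo is representative or the food is actually good.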
The metadata provided by the photographs is insufficient to judge whether a restaurant is good or bad: the depth of perception is limited and imagery is subjective.
That being said, it is interesting to note how classification via photos is used to gather information, and how we can improve the classification of photographs to help ourselves make better-informed decisions.
Also came across this:
Extracting metadata out of photographs
With the prevalence of searching via visual imagery, will we one day be able to sync our photo albums (like our digital footprints) and share them with the public (only what we want to share) without needing to manually post each picture? I think this poses a good discussion topic for understanding the psychological underpinnings of an individual sharing every moment, given the proliferation of such social apps in the market right now.
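If albums were auto-synced, “sharing only what we want to share” would have to be enforced in software. One plausible sketch, assuming a hypothetical photo-metadata record, is to scrub sensitive fields (such as GPS coordinates) before anything leaves the device; the field names here are invented for illustration:

```python
# Hypothetical list of metadata fields we never want auto-shared.
SENSITIVE_FIELDS = {"gps", "device_serial"}

def scrub(metadata):
    """Return a copy of photo metadata with sensitive fields removed."""
    return {k: v for k, v in metadata.items() if k not in SENSITIVE_FIELDS}

photo_meta = {
    "taken_at": "2016-10-03T12:30:00Z",
    "camera": "Phone X",
    "gps": (1.3521, 103.8198),       # precise location: keep private
    "device_serial": "SN-0042",      # identifies the device: keep private
}

shared = scrub(photo_meta)
print(sorted(shared))  # ['camera', 'taken_at']
```

The interesting design question is who decides what counts as sensitive: the user, the app, or a platform-wide default.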