Monday, November 16, 2009

On OurTube: “Open video” could beget the next great wave in web innovation - if it gets off the ground

Earlier this semester, I read an article on remixing videos by Jonathan McIntosh. In the same issue of Technology Review, David Talbot discusses the trials and tribulations of getting “open video” into the mainstream. It begins with the story of Michael Dale and Abram Stern, who decided to make video remixes using C-SPAN coverage of Senate floor speeches. They ran into difficulties with copyright law but could not find a viable alternative source for the speeches, partly because of formatting issues. Open standards would accelerate the search process, but the time for their widespread implementation has not yet arrived.
Dale now works for the Wikimedia Foundation, which has just this goal in mind. They want to develop “open video” before there is a demand for it because they believe that, in general, the necessity of this sort of innovation is not obvious to the public. As Chris Blizzard of Mozilla states, “Open standards create low friction. Low friction creates innovation. Innovation makes people want to pick it up and use it. But it’s not something where we can guess what ‘it’ is. We just create the environment that lets ‘it’ emerge.”
HTML5 is part of this new approach: no plug-ins are required because video playback is built directly into the browser. Creative Commons is also helping by creating precedent in the domain of copyright law and establishing open licensing.
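To make the “no plug-ins” point concrete, here is a small sketch (mine, not from the article) using the standard HTML5 canPlayType API to ask whether the browser natively plays Ogg Theora, the open format Mozilla and Wikimedia were championing at the time:

```typescript
// Ask the browser whether it can natively play open video (Ogg Theora),
// with no Flash or other plug-in involved. canPlayType() is part of the
// HTML5 media API and returns "", "maybe", or "probably".
const probe = document.createElement("video");

const supportsOpenVideo =
  probe.canPlayType('video/ogg; codecs="theora, vorbis"') !== "";

console.log(
  supportsOpenVideo
    ? "Native open-video playback available - no plug-in required."
    : "No native support; a plug-in or fallback player would be needed."
);
```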

Reference:
Talbot, David (2009). OurTube: “Open video” could beget the next great wave in web innovation - if it gets off the ground. Technology Review, Vol. 112, No. 5.

Monday, October 5, 2009

Compassion Map


The elements of compassion.

Saturday, September 26, 2009

Emergence and “Swarm Intelligence”


Emergence occurs when numerous simple interactions take place and develop a secondary complexity or a discernible pattern. More rigorous definitions exist in mathematics and physics, but this layman’s explanation works well to explain some of the intricacies related to everything from economics to trends in virtual-environment rules of etiquette. In “Swarm Intelligence” by James Kennedy and Russell C. Eberhart, emergence is introduced in chapter one as a key term for understanding evolution in culture and society.
The first chapter of “Wikinomics” discusses a grand world of collaboration and its consequences. These consequences are the emergent phenomena of a social construct. There are predictable occurrences, but the unpredictable effects and variations also make for an interesting study. The authors of “Swarm Intelligence” state, “Our argument is that cultural evolution should be defined, not as operations on ideas, but as operations on minds. The evolution of ideas involves changes in the states of minds that hold ideas, not changes in the ideas themselves; it is a search - by minds - through the universe of ideas, to find the fitter ones.”
Thinking of cultural evolution in this way allows for additional commentary on what may or may not occur as the new “wikinomics” comes into prevalence. One side effect already being witnessed is the change in business practices. As Tapscott and Williams write, “Publishers of music, literature, movies, software, and television are like proverbial canaries in a coal mine - the first casualties of a revolution that is sweeping across all industries.” The benefits of collaboration also represent another exciting avenue for discovery. “Swarm Intelligence” provides an excellent mental framework for engaging with this complex world.
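For readers who want to see emergence in miniature, the sketch below (my own illustration, with arbitrary constants and an arbitrary test function) implements the particle swarm optimizer that gives Kennedy and Eberhart’s book its name. Each particle follows only two local rules - drift toward its own best-known point and toward the swarm’s best - yet a global search capability emerges that no individual particle was programmed to perform:

```typescript
// Minimal particle swarm optimization sketch (after Kennedy & Eberhart).
// Each particle obeys simple local rules; the swarm-level search emerges.
function pso(f: (x: number) => number, particles = 20, steps = 100): number {
  const xs = Array.from({ length: particles }, () => Math.random() * 20 - 10);
  const vs = xs.map(() => 0);
  const personalBest = [...xs];
  let globalBest = xs.reduce((a, b) => (f(b) < f(a) ? b : a));

  for (let t = 0; t < steps; t++) {
    for (let i = 0; i < particles; i++) {
      // Velocity: inertia plus pulls toward personal and global bests.
      vs[i] =
        0.7 * vs[i] +
        1.5 * Math.random() * (personalBest[i] - xs[i]) +
        1.5 * Math.random() * (globalBest - xs[i]);
      xs[i] += vs[i];
      if (f(xs[i]) < f(personalBest[i])) personalBest[i] = xs[i];
      if (f(xs[i]) < f(globalBest)) globalBest = xs[i];
    }
  }
  return globalBest;
}

// The swarm reliably converges near x = 3, the minimum of (x - 3)^2.
console.log(pso((x) => (x - 3) ** 2));
```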
Recommended reading:
Kennedy, James and Russell C. Eberhart, with Yuhui Shi. Swarm Intelligence. San Francisco: Morgan Kaufmann Publishers. 2001.

Thursday, September 24, 2009

Douglas Engelbart: Augmenting Human Intellect and Bootstrapping

The Roaring Twenties witnessed fast economic growth until the Wall Street Crash of 1929. Before finally bottoming out in 1932, the Dow Jones Industrial Average would lose 89% of its value.1 Growing up during this period meant a simpler view of one’s needs, and it set the stage for a definitive moment in Douglas Engelbart’s life when he was just 25 years old. In December 1950, taking stock of his life and his job at the National Advisory Committee for Aeronautics, he realized he had achieved the goals he had set for himself as a kid growing up during the Great Depression - “get an education, get a steady job, get married” - and that he “no longer had any goals.”2 On a drive through Northern California, the young electrical engineer at Ames Research calculated that he had roughly 5.5 million more minutes to work during his life. He had to figure out what to do with this precious time.
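(Waldrop does not show the arithmetic, but one plausible reconstruction of Engelbart’s figure: about 40 remaining working years × 50 weeks per year × 45 hours per week × 60 minutes ≈ 5.4 million minutes.)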
With no particular designs on getting rich or changing careers, he thought about saving the world. After several months of pondering how to fulfill this need, he came up with the following insights:
FLASH-1: The difficulty of mankind’s problems was increasing at a greater rate than our ability to cope. (We are in trouble.)
FLASH-2: Boosting mankind’s ability to deal with complex, urgent problems would be an attractive candidate as an arena in which a young person might try to “make the most difference.” (Yes, but there’s that question of what does the young electrical engineer do about it? Retread for a role as educator, research psychologist, legislator...? Is there any handle there that an electrical engineer could...?)
FLASH-3: Aha - graphic vision surges forth of me sitting at a large CRT console, working in ways that are rapidly evolving in front of my eyes (beginning from memories of the radar-screen consoles I used to service.)3
After working through some details regarding the implementation of such a device where a “general-purpose, computer-powered information environment”4 would assist with network-based collaborations between colleagues, he dubbed this new route in his career “augmenting the human intellect.” However, it wouldn’t be so easy to get others to share in his vision or to provide funding for this lofty endeavor.
Within the context of 1950s technology, this vision still seemed unattainable to others in his department and in the business world. After all, the leading edge in computing was the IBM 704, capable of executing up to 40,000 instructions per second. So instead of developing the idea of augmenting human intellect for his PhD, he wrote his dissertation on bi-stable gaseous plasma digital devices. After graduating, he tried to find a more “congenial” environment for his augmentation work, but found that he had to subsume his intentions even at the Stanford Research Institute, where he began working in October 1957.
When he was finally able to work on augmentation in 1959, after receiving funding from the Air Force’s Office of Scientific Research, it was still difficult to get intellectual backing for his project: he had to place his augmentation research squarely within others’ disciplines for them to be responsive to his message. In general, his augmentation environment was written off “as just another information-retrieval system.” So, in order to be taken more seriously, he decided to write a manifesto, which took him almost two years.
“Augmenting the Human Intellect: A Conceptual Framework” was finished in 1962. Engelbart believed technology could augment human intellect by developing “an integrated hierarchy of cooperative man-computer process capabilities.” This would “step up” the mental abilities of a person level by level, enabling them to handle more complex thought processes. He also explains: “We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully coexist with powerful concepts, streamlined technologies and notation, sophisticated methods, and high-powered electronic aids.”5
In his paper, Engelbart writes that two people during the previous two decades had “speculated upon the possibilities of close man-machine cooperation.” Those influences were Vannevar Bush and J.C.R. Licklider. Bush coined the term memex to describe a system in which items were categorized by associative indexing and then retrieved using a specialized workstation. Licklider (1960) defined a concept called “man-computer symbiosis”, a system whereby humans and computers work in conjunction to “think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”6 Engelbart also placed his aspirations within the context of work being done by Vazsonyi, Morse, Teager, Culler, and Huff. One name on his list would prove to be a powerful supporter and ally.
While Engelbart attempted to figure out how human beings had evolved to deal with complex situations, he sought funding from various sources. His proposal for creating an “interdisciplinary Knowledge Augmentation Laboratory that could pursue the technology of human augmentation as quickly as possible”7 finally came across the desk of J.C.R. Licklider at ARPA.
By 1963, Engelbart had funding. Later he would explain, “Lick was the first person to believe in me. And he was the first person to stick his neck out and give me a chance. In fact, if he hadn’t done that, if he hadn’t stuck his neck out and given me money, I don’t think anybody ever would have done so. That was why I trusted him. Lick was like my big brother.”8
While working towards augmenting human intellect, he would invent the mouse, work on “hypertext systems as part of the NLS” (oNLine System), and work towards his “vision for a personal workstation that can legitimately be thought of as one of the sources of ideas for the personal computer.”9
The NLS was first publicly demonstrated at the 1968 Fall Joint Computer Conference during a 90-minute multimedia presentation where Engelbart debuted the mouse, hypermedia, and on-screen video teleconferencing. In 1989 he founded the Bootstrap Institute, with its main focus being to create high-performance organizations that include “pro-active participation from stakeholders” in all realms of influence including government, industry, and society.
Engelbart, with his evolving pursuits, recognizes that as the development of technology surpasses the development of human intellect, certain adjustments must be made. As the sophistication of technology increases, society has to continually revise its methods of information appropriation to accommodate these changes. Engelbart (1998) acknowledged this need for modification in the following quote:
Real social danger today is that the technology is erupting and moving so much faster than it ever ever ever has in all of our historical experience ... [It’s] time to start adapting society to this revolution in the technology. There’s a lot of potential dangers ahead if we don’t adapt it successfully.10
Stated more succinctly as the Law of Disruption, Downes and Mui in Unleashing the Killer App (2000) write that “[s]ocial, political and economic systems change incrementally, but technology changes exponentially.”11
The bootstrapping approach is currently being carried forward by the Doug Engelbart Institute. Additionally, in 2005 Engelbart received funding from the National Science Foundation for the HyperScope project. Once again, we find that time often brings newer technology that is more easily capable of achieving previously stated goals. In this case, HyperScope uses Ajax and DHTML to recreate the abilities of NLS and its renamed successor, Augment, which can link within and across documents in order to “engage a wider community in a dialogue of collaborative software and services.”12
Augmenting human intellect as a theory and aspiration has itself been further augmented to suggest that all facets of society must come together and work towards a common goal. In Engelbart’s coinage, organizations can improve the very process they use for improvement, thereby iteratively compounding the effect. As a goal for HCI, iterative progress towards making information more usable, intuitive, and effective would be the perfect complement to Engelbart’s dream. It seems that intellect’s final compatriots would be responsibility and compassion, and if this has not yet been addressed, perhaps it will be in the next iteration.
--Christine Rosakranse, for Comm-6480



Resources:
1 "Wall Street Crash of 1929." Wikipedia, The Free Encyclopedia. 20 Sep 2009, 16:26 UTC. 20 Sep 2009
.
2 Waldrop, M. Mitchell. The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal. New York: The Penguin Group. 2001. pg. 211.
3 Waldrop, M. Mitchell. The Dream Machine. pgs. 211-212.
4 Waldrop, M. Mitchell. The Dream Machine. pg. 212.
5 Engelbart, Douglas C. Augmenting Human Intellect: A Conceptual Framework. Summary Report AFOSR-3223 under Contract AF 49(638)-1024, SRI Project 3578 for Air Force Office of Scientific Research, Stanford Research Institute, Menlo Park, Ca., October 1962.
6 Jacko, Julie A., and Andrew Sears, eds. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. New Jersey: Lawrence Erlbaum Associates, Publishers. (2003) pg. 3.
7 Waldrop, M. Mitchell. The Dream Machine. pg. 216.
8 Waldrop, M. Mitchell. The Dream Machine. pg. 217.
9 Jacko, Julie A., and Andrew Sears, eds. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. New Jersey: Lawrence Erlbaum Associates, Publishers. (2003) pg. 5.
10 Brown, John Seely and Duguid, Paul. The Social Life of Information. pg. 84 (Engelbart, interviewed on National Public Radio’s Morning Edition, 11 December 1998)
11 Downes, Larry, and Chunka Mui. Unleashing the Killer App: Digital Strategies for Market Domination. Boston: Harvard Business School Press. 2000.
12 "Douglas Engelbart." Wikipedia, The Free Encyclopedia. 22 Sep 2009, 00:43 UTC. 22 Sep 2009 

Monday, September 21, 2009

Remix: On “Open Video in Practice” Technology Review, Vol. 112/No. 5, Sept/Oct 2009 Issue


Remixing, though prevalent in our society in myriad ways, remains an area of contention between copyright and fair use. Shortly before reading Lessig’s Remix, I had finished an article in Technology Review (Sept/Oct 2009 issue) entitled “Open Video in Practice” and subtitled “How a remix was made - and how it could have been easier” (pg. 76). It was nestled inside a larger article on “open video” called “OurTube”, and it caught my attention because it had pictures of fictional characters from two vampire-related fictions, Buffy the Vampire Slayer and Twilight.
Technology Review rarely features pictures of mainstream celebrities, yet this article had the main characters Edward Cullen and Buffy Summers in the right-hand corner. Jonathan McIntosh, a New York-based video artist, created a remix of scenes in which Edward Cullen comes across less as a star-crossed lover and more as a stalker with strange facial twitches. By playing him against Buffy, a strong female character, McIntosh points out the innate creepiness of Edward’s relationships.
To create this remix, McIntosh had to find the right dialogue, which he did by sifting through Google text searches of “fan-transcribed dialogue”. The article states that this laborious task could be simplified with open video standards, which would enable a searchable database of written dialogue coupled with video clips. Over time, an archive could be developed with clips of everything from congressional hearings to horror movies.
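To picture what such a database might look like, here is a purely hypothetical sketch of mine (the type, function, and URL are invented, not anything the article specifies): each transcribed line is paired with a clip locator, so a text query returns playable segments rather than bare text hits.

```typescript
// Hypothetical shape of an open-video dialogue index: every transcript
// line knows which video it came from and where the clip starts and ends.
interface DialogueClip {
  text: string;      // fan- or machine-transcribed line of dialogue
  videoUrl: string;  // location of the openly licensed source video
  startSec: number;  // clip boundaries within that video
  endSec: number;
}

function searchDialogue(index: DialogueClip[], query: string): DialogueClip[] {
  const q = query.toLowerCase();
  return index.filter((clip) => clip.text.toLowerCase().includes(q));
}

// A remixer searching for a phrase gets back clips, not just text hits.
const archive: DialogueClip[] = [
  { text: "I don't have the strength to stay away from you anymore.",
    videoUrl: "http://example.org/twilight-scene.ogv", startSec: 512, endSec: 518 },
];
console.log(searchDialogue(archive, "stay away"));
```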
Making video clips available in this format would not only mean a shorter route to finding suitable segments for any given remixing project; it also suggests a more streamlined editing process. Currently, this type of video amalgamation occupies a legal borderland with little precedent. No media company has yet tried to have the Buffy/Twilight remix removed; it is available at www.rebelliouspixels.com. In terms of opening the world of media to creative reprocessing, this mash-up acts as one milestone on the way to open video standards.

Tuesday, September 15, 2009

Augmenting Human Compassion


“Augmenting human intellect” represents one of the continual goals of any well-designed technology. In the course of our readings, three major influences have sprung up time and again as to how we, as a society, may go about using technology to evolve our cognitive capabilities. For Douglas Engelbart (1962), technology may do so by leveraging already existing perceptual mappings or by bringing the mental abilities of a person up to a level of more complex thought through various methods including “streamlined terminology” and “powerful concepts.” Licklider (1960) also defines a similar concept called “man-computer symbiosis”, a system whereby humans and computers work in conjunction to “think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” William Ross Ashby (1956) also wrote of “amplifying intelligence” in his work on cybernetics. However, intelligence needs to be countered and balanced by basic moral and ethical considerations. I would argue that the foundation of these considerations lies squarely with an entity’s capability to feel compassion. Additionally, with a certain level of compassion (and intelligence), ethics, as in a listing of rules or system of conduct, becomes secondary. So, instead of concentrating on the aspect of human cognitive evolution defined as intellect, I would like to find methods for augmenting human compassion using digital media.
Compassion, as a component of a healthy mental state and a necessity for large-scale social organization, is a sometimes marginalized concept. That may be because developing compassion in children is seen as the responsibility of parents and families; certain concepts are only slowly adopted into mainstream consideration. As it is, research into human-computer interaction focuses mainly on functionality and usability. Even the more human-centered designs are concerned with business considerations such as turnaround and click-through analytics.
For example, while reading Myers’ A Brief History of Human Computer Interaction Technology, I found that the introduction clearly states that his history only covers the “computer side of HCI” and that “a companion article on the history of the ‘human side,’ discussing the contributions from psychology, design, human factors and ergonomics would also be appropriate.” This “human side” approach would form the basis of my research project for determining how one might augment human compassion.
Discovering what makes one more compassionate would be the first topic for research. Within the context of digital media and the constraints of one semester, it seems daunting to distill compassion into a measurable construct, but I hope that by making an open call for ideas some epiphany will come about.
From the historical perspective, we can see that in developing his conceptual framework for augmenting human intellect, Engelbart defines the objectives for his study and covers his basic perspective. He promotes leaving room for intuition or a human’s “feel for a situation”. For augmenting compassion, I would also say that one would have to leave room for epiphany as well.
My first avenue for exploration could include researching whether or not any current internet memes act to augment compassion. From cute LOLCATS with funny captions to YouTube videos like Christian the Lion, does sharing these with others help to augment our society’s overall level of compassion? And, conversely, is sharing morbid imagery damaging to compassion? One caveat comes with the level of interaction that might be necessary for long-term effects. If one sees something, is it enough to have a persisting effect? Or must one also be involved somehow to ensure a stable change in mentality?
Given these possibilities for investigation, another avenue for exploration that might prove to engender long-term increases in compassion levels would involve the integration of a participation component through interactive art or music. If a lack of compassion stems from a lack of empathy with others or with a disconnect from humanity or nature, then a key component to developing compassion in others would involve creating a palpable connection to others and, thereby, to humanity in general. With interactive art, the person becomes a component of the creation, a powerful metaphor that might prove helpful for compassion development. However, a connection beyond the computer might also be necessary for augmentation. An association from human to art piece to creator of art piece to humanity would be ideal.
Following Engelbart’s format, the objective of a study taking in the previous suppositions and conjectures would include the following goals: (1) to find the factors that determine a given individual’s level of compassion; and (2) to develop methods that would act to augment human compassion using digital media. Engelbart’s specifications for his framework still fit for this research direction.
Step one would be to find a test for compassion so that quantitative results can verify any change from before exposure to a stimulus to afterwards. Step two would involve testing non-participatory stimuli, such as the YouTube videos, for changes in levels of compassion. Step three could then cover participatory situations of varying complexity.
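As a toy illustration of step one (my own sketch - the scale and the numbers are invented), quantitative verification reduces to comparing each participant’s compassion score before and after exposure to the stimulus:

```typescript
// Toy pre/post comparison: a positive mean difference would suggest the
// stimulus raised measured compassion. All scores here are hypothetical.
function meanChange(pre: number[], post: number[]): number {
  const diffs = pre.map((score, i) => post[i] - score);
  return diffs.reduce((sum, d) => sum + d, 0) / diffs.length;
}

const preScores = [12, 15, 9, 14];   // compassion-scale scores before stimulus
const postScores = [14, 15, 12, 17]; // the same participants afterwards
console.log(meanChange(preScores, postScores)); // 2.0 on this toy data
```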
As this blog post/response essay is written in response to our second week’s readings on the subject of History in Perspective, any further reading suggestions along these lines of augmenting compassion, augmenting empathy, or developing emotional intelligence would be greatly appreciated. Any studies that have been performed on the effect of interactive art would also be of great interest to me. Usually I am not one for trying to pin down the exact meaning or relevance of a piece of art, but in the context of a compassionate evolution I would concede the necessity for some formal investigation into the matter.
I believe that the nexus of intelligence and compassion would negate the need for overly strict rules that may be based on a narrow or subjective morality. The ultimate goal for technological society must include room for this augmented compassion.
--Christine Rosakranse

Monday, May 11, 2009

The different theoretical approaches to the concept of presence. What is the best approach?


“Presence is a multi-dimensional concept; i.e., there are different types of presence.”1

The concept of presence has not yet been concretely and thoroughly defined or operationalized, though many attempts at covering its different aspects have been made by academics across many fields. Lombard and Ditton2 list six conceptualizations that have been developed to define this ambiguous concept. Taken as a whole, these conceptualizations represent facets of an overarching definition rather than complete and absolute explanations unto themselves. A faceted classification scheme built from them would therefore cover the intricacies of presence better than any individual conceptualization. However, a more concise listing of the facets would have to be developed.
Currently, the facets of presence listed by Lombard and Ditton, among others, are nonorthogonal, with common elements overlapping between groups. The International Society for Presence Research (ISPR) takes this grouping further and lists five groups of approaches, but this list, too, can be more tightly and exactly configured. For example, presence as social richness, as discussed in Lombard and Ditton’s paper “At the Heart of It All: The Concept of Presence”, is incomplete because its defining parameters focus on the medium itself and not on the actual interaction. Measurements of “capacity for immediate feedback, the number of senses involved, personalization and language variety” do not necessarily capture the intricacies of presence in a mediated interaction. The social aspect of presence, in terms of a faceted classification scheme, would include this concept of social richness as well as some of the following conceptualizations that focus on social elements of interaction, as listed by the ISPR: “social presence”, “social actor within the medium”, “parasocial interaction”, “co-presence”, “transportation: shared space”, and “medium as social actor”.
Presence as realism creates a twofold definition that can be taken as either “social realism” or “perceptual realism”. In this way, it also treats the interaction at a more superficial level than a deeper, more fully expounded theory would. From this concept, however, we can take its two distinct definitions and add them to the vocabulary of a classification for presence. As defined by the ISPR:
"’Social realism’ occurs when part or all of a person's perception fails to accurately acknowledge the role of technology that makes it appear that s/he is in a physical location and environment in which the social characteristics correspond to those of the physical world.”

Perceptual realism is linked with the similar terms “sensory presence," "naturalness," "ecological validity", and "tactile engagement".
Presence as transportation starts to include more of the relevant elements of a mediated interaction. However, as defined in Lombard and Ditton’s paper, the three types of transportation are apparently considered mutually exclusive, which is not necessarily the case. In 1997, when the paper was written, perhaps these distinctions were more obvious. Currently, the borders between “You are there”, “It is here”, and “We are together” are slowly dissolving. This is especially true of online games, where multiple live players join together, using microphones to communicate for missions and strategic planning.
Some of the aspects of presence as transportation are grouped with "spatial presence," "physical presence," "a sense of physical space," "perceptual immersion," and "a sense of being there". These occur when all or part of a person’s perception is submerged into a mediated environment. In other words, when they are “there”.
Lombard and Ditton list two more conceptualizations beyond presence as social richness, realism, transportation, and immersion. Presence as social actor within medium was grouped above with the other social aspects. It is a voluntary willingness to “overlook the mediated or even artificial nature of an entity within a medium and attempt to interact with it” that becomes evident in environments such as Second Life and other games, where characters, whether human-controlled or computer avatars, are treated with the same care and consideration. Lastly, presence as medium as social actor is well represented by HAL in 2001: A Space Odyssey. This, too, is part of the social aspects already mentioned.
The International Society for Presence Research also includes another group of conceptualizations that include "engagement," "involvement," and "psychological immersion". This does not directly translate to one of the previously mentioned facets. However, this simply reveals one of the flaws of the current system. Engagement and involvement signify how much people get “sucked into” the medium and, therefore, become more closely tied to the occurrences therein.
Using these differentiations of experience does break the conceptualizations into only five categories, but these still involve a good deal of overlap. To be more concise, a listing should be made in which a single experience can be “chopped up” into unique facets. For an initial foray into this realm, I would include the essence of interaction (physical, mental), the timeline of interaction (synchronous, asynchronous, one-time, continual), the medium of interaction or senses involved (typing - touch, visual, auditory), and the intention of interaction (learning, gaming, etc.). This is a very inchoate listing and more research would have to be done, but I believe this to be the most efficient direction for future presence research.
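To show what “unique facets” could mean in practice, here is a speculative sketch of the proposed listing (the value sets are illustrative only): every mediated experience would take a value along each facet simultaneously, keeping the facets orthogonal rather than overlapping.

```typescript
// Speculative encoding of the proposed facets of presence. A single
// experience is classified along every facet at once, so no facet
// needs to overlap with another.
type Essence = "physical" | "mental";
type Timeline = "synchronous" | "asynchronous" | "one-time" | "continual";
type Sense = "touch" | "visual" | "auditory";
type Intention = "learning" | "gaming" | "socializing" | "working";

interface PresenceExperience {
  essence: Essence;
  timeline: Timeline[];
  senses: Sense[];
  intention: Intention;
}

// Example: a cooperative online game session with voice chat.
const raidNight: PresenceExperience = {
  essence: "mental",
  timeline: ["synchronous", "continual"],
  senses: ["visual", "auditory"],
  intention: "gaming",
};
console.log(raidNight);
```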
References:
Denton, William. "How to Make a Faceted Classification and Put It On the Web" Nov. 2003. http://www.miskatonic.org/library/facet-web-howto.html
Lombard, Matthew, and Theresa Ditton. “At the Heart of It All: The Concept of Presence.” Journal of Computer-Mediated Communication 3(2), September 1997.
International Society for Presence Research. (2000). The Concept of Presence: Explication Statement. Retrieved from http://ispr.info/

Tuesday, January 27, 2009

Nielsen's "Usability" and Some Questions (Part 3/3)


Nielsen says, “Peoples’ attitudes toward computers in general should probably be seen as a component of the social acceptability of computers rather than their usability.” One interesting outgrowth of this statement: as time progresses and technology finds its way into true ubiquity, we will find fewer instances of true novices, and social acceptability will increase regardless of a designer’s intent.
In his discussion of usability trade-offs, Jakob Nielsen makes several good points, including the fact that one cannot design the perfect interface for all user types. Also, where other considerations, such as security measures, take precedence, ease of use sometimes suffers. This area leads naturally into section 2.5, where he discusses categories of users and user differences - an essential element to cover in these opening chapters. The matrix of users’ experience in figure 3 on page 44 is also enlightening.
The way Nielsen goes on to describe systems as they pertain to this matrix is quite eye-opening, especially for someone who has never considered these interfaces from the user’s perspective. He also introduces quartiles to signify whether a user falls at the high or low end of a given spectrum.
At the end of this chapter, Nielsen also mentions that it is not necessarily the best idea to permit the user to customize their own interface beyond a certain cosmetic point. This point is followed up in later chapters, but it does provide the context for additional questions.
Question:
How does one decide how much of an interface should be customizable (for example, in the case of Facebook versus Myspace)?

Nielsen's "Usability" and Some Questions (Part 2/3)


Memorability, on the other hand, is one of those elements that might not seem immediately obvious when user testing first occurs, but many of us have experienced the loss of system proficiency after not having used a product for an extended period. I continue to experience some anxiety when moving between Adobe products whose quick-keys for certain shared functions are not the same, or when the interface (e.g., in Illustrator) differs between a Mac and a PC. Additionally, whereas errors are easily recognized as a hindrance to production, subjective satisfaction would seem to be a derivative of the other attributes of usability.
Question:
Does subjective satisfaction deserve to be an individual element or is it a derivation of the other attributes of usability?

Nielsen's "Usability" and Some Questions (Part 1/3)


What is usability?
One of the key concerns for usability professionals is being able to answer the question “What is usability?” According to Jakob Nielsen, usability can be defined by the following five attributes: learnability, efficiency, memorability, errors, and satisfaction. Each of these attributes seems obvious once you read the list, but, of course, coming up with a complete and concise list is the difficult part. Nielsen then further explicates the details of each attribute. In the case of learnability, I’m glad that Nielsen differentiates between novice and expert users. However, he does not necessarily include level of comfort in mastering an interface. Holding usability to a higher standard would involve taking learnability to the level of mastery and a high degree of comfort. Currently, “learning software” is the strongest bastion for these more stringent levels of learnability.
One question for this section would be:
How does one use leading-edge teaching techniques to increase cognition and make learnability more central to usability?