CS 6750 Test 2

75 flashcards covering mental models, GOMS/KLM, distributed cognition, activity theory, value-sensitive design, evaluation methods, and Agile HCI for CS6750

What You'll Learn

Free flashcards for Human-Computer Interaction Test 2 topics: mental models, slips vs. mistakes, representations, GOMS and KLM task analysis, cognitive task analysis, distributed cognition, situated action, activity theory, value-sensitive design, heuristic evaluation, cognitive walkthrough, qualitative vs. empirical evaluation, and Agile HCI integration. Ideal for graduate HCI students studying CS6750 at Georgia Tech.

Key Topics

  • Mental models, representations, slips, and mistakes: types, causes, and design implications
  • GOMS, KLM, CPM-GOMS, and NGOMSL: task analysis models for expert performance prediction
  • Cognitive Task Analysis: strengths, weaknesses, and correspondence to the predictor model
  • Distributed Cognition: Hutchins's cockpit example and the socio-technical unit of analysis
  • Situated Action and Activity Theory: goals, artifacts, and contrasts with distributed cognition
  • Value-Sensitive Design: direct/indirect stakeholders and three investigation types
  • Artifacts have politics (Langdon Winner): Moses bridges, McCormick molding machines
  • Heuristic Evaluation: Nielsen's 9 heuristics, evaluator count, and coverage findings
  • Cognitive Walkthrough: preparation and evaluation phases, gulfs of execution and evaluation
  • Qualitative, empirical, and predictive evaluation: when to use each in the design life cycle
  • Agile vs. User-Centered Design: parallel track method, Boehm & Turner conditions
  • Formative vs. summative evaluation, A/B testing, live prototyping, and usability metrics

Looking for more human-computer interaction resources? Visit the Explore page to browse related decks or use the Create Your Own Deck flow to customize this set.

How to study this deck

Start with a quick skim of the questions, then launch study mode to flip cards until you can answer each prompt without hesitation. Revisit tricky cards using shuffle or reverse order, and schedule a follow-up review within 48 hours to reinforce retention.

Preview: CS 6750 Test 2

Question

What is a mental model?

Answer

A person's internal understanding of how something in the real world works — used to simulate events and make predictions. Example: you predict a basketball will bounce before it hits the ground. In HCI, designers aim to build systems that match users' existing mental models or that teach users accurate new ones.

Question

What is a representation in HCI?

Answer

The way a problem, system, or interface presents information to a user. A good representation makes the solution self-evident. Example: representing the wolves-and-sheep problem visually (with icons) makes it far easier to solve than describing it in audio alone.

Question

What is a slip?

Answer

A user error where the user has the CORRECT mental model but does the wrong thing anyway. Example: knowing you should click 'Yes' to save but clicking 'No' because the buttons are in an unexpected order. Sub-types: action-based slips and memory-lapse slips.

Question

What is a mistake?

Answer

A user error where the user has the WRONG mental model and therefore takes the wrong action. Example: a dialog says 'Revert to original file?' and the user doesn't know what 'revert' means — any choice may be wrong. Sub-types: rule-based, knowledge-based, and memory-lapse mistakes.

Question

What is the difference between a slip and a mistake?

Answer

A slip = right mental model, wrong action (execution failure). A mistake = wrong mental model, wrong action (planning failure). Slips are addressed with better interface design (constraints, mappings). Mistakes are addressed with better representations and feedback.

Question

What is an action-based slip?

Answer

A slip where the user performs the wrong physical action or performs the right action on the wrong object, even though they knew the correct thing to do. Example: clicking 'No' when you meant to click 'Yes' because muscle memory sent your hand to the left button.

Question

What is a memory-lapse slip?

Answer

A slip where the user forgets to do something they knew they needed to do. Example: forgetting to start the microwave timer after placing your food inside. The 'save before closing' dialog exists specifically to prevent this type of slip.

Question

What is a rule-based mistake?

Answer

A mistake where the user correctly understands the situation but applies the wrong rule. Example: a user wants to save their work, knows the dialog is asking about saving, but incorrectly believes 'No' is the save option.

Question

What is a knowledge-based mistake?

Answer

A mistake where the user misunderstands the situation itself. Example: a user doesn't realize they made any changes to a document, so when a 'save changes?' dialog appears, they click 'No' — not realizing there were changes to save.

Question

What is learned helplessness?

Answer

A condition that develops when users experience repeated failures or poor feedback from an interface, causing them to believe they are permanently incapable of using the system and to stop trying. Caused by interfaces that give inadequate or confusing error feedback.

Question

What is expert blind spot?

Answer

The tendency for expert designers to forget what it was like to be a novice, leading to designs that assume knowledge users don't have. Example: writing a recipe that assumes the reader knows what 'golden brown' looks like. Key insight: you are not your own user.

Question

What are the 5 characteristics of a good representation?

Answer

(1) Makes relationships explicit; (2) Brings objects and relationships together; (3) Excludes irrelevant information; (4) Exposes natural constraints; (5) Is intuitive and familiar. A good representation makes the solution to a problem self-evident without extra explanation.

Question

What is the GOMS model?

Answer

A human information processor model for task analysis. GOMS stands for Goals, Operators, Methods, and Selection Rules. It treats the user as an input-output machine and models interaction at the level of explicit steps. Useful for predicting expert task performance time. Assumes users are already experts — not suited for novices.

Question

What do the four letters in GOMS stand for?

Answer

G = Goals (what the user wants to accomplish); O = Operators (atomic actions, e.g., pressing a key); M = Methods (sequences of operators that achieve a goal); S = Selection Rules (criteria for choosing among competing methods).

Question

What are operators in a GOMS model?

Answer

The smallest, atomic actions a user can perform — the building blocks of methods. Examples: pressing a key, clicking a button, reading a display. Operators cannot be broken down further and are assigned a measurable cost (usually time) to enable performance predictions.

Question

What are selection rules in a GOMS model?

Answer

The criteria a user uses to decide which method to use in a given situation. Example: 'If my hands are full, I will use the keypad to disable the alarm. If my hands are free, I will use the keychain fob.' Selection rules capture conditional decision-making logic.
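Because a selection rule is just conditional logic over competing methods, it can be sketched directly as code. This is an illustrative sketch of the alarm example above; the function and method names are hypothetical, not part of any formal GOMS notation.

```python
# Hypothetical sketch: a GOMS selection rule as a conditional
# that picks one method (sequence of operators) for a single goal.
def select_method(hands_full: bool) -> str:
    """Selection rule for the goal 'disable the alarm'."""
    if hands_full:
        return "use_keypad"        # method 1: walk to panel, type code
    return "use_keychain_fob"      # method 2: press button on the fob

print(select_method(hands_full=True))   # use_keypad
print(select_method(hands_full=False))  # use_keychain_fob
```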

Question

What is the KLM (Keystroke-Level Model)?

Answer

A fine-grained GOMS variant that models individual physical actions (keystrokes, clicks, mouse movement, mental preparation) and assigns each a time value to predict how long an expert user takes to complete a task. Useful for numerically comparing interface designs.
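Since KLM predictions are just a sum of per-operator costs, the arithmetic is easy to sketch. The operator times below are the commonly cited Card, Moran & Newell textbook estimates (in seconds); real analyses calibrate these, and the example operator sequence is made up.

```python
# Minimal KLM sketch: predict expert task time by summing
# per-operator time costs (commonly cited textbook values, seconds).
KLM_TIMES = {
    "K": 0.20,  # keystroke (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # press or release a mouse button
}

def predict_seconds(ops: str) -> float:
    """Sum the cost of each operator in the sequence."""
    return round(sum(KLM_TIMES[op] for op in ops), 2)

# e.g. think, point at a field, click, then type a 4-digit code:
print(predict_seconds("MPB" + "KKKK"))  # 3.35
```

Comparing two candidate designs is then just comparing two predicted sums, which is exactly how KLM is used to choose between interfaces numerically.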

Question

What is CPM-GOMS?

Answer

Critical Path Method GOMS — a GOMS variant that models parallel, concurrent tasks rather than strictly sequential ones. Useful when users do multiple cognitive and physical things simultaneously, such as driving while watching a navigation display.

Question

What is NGOMSL?

Answer

Natural GOMS Language — a natural-language form of GOMS that expresses the model in readable English-like statements. Easier for designers to interpret than formal GOMS notation. Particularly good at revealing when a user is being asked to hold too much in working memory.

Question

What is Cognitive Task Analysis (CTA)?

Answer

A task analysis method that focuses on what happens inside the user's head — memory, attention, cognitive load, and decision-making at each step. Corresponds to the predictor model of the user. Strengths: captures mental processes of experts. Weaknesses: time-consuming, not suited to novices, can miss contextual factors.

Question

What are the weaknesses of GOMS models?

Answer

(1) Assumes users are already experts — doesn't model novice behavior or errors; (2) Doesn't capture internal reasoning complexity; (3) Standard GOMS doesn't handle parallel tasks or complex sub-methods well (though CPM-GOMS and NGOMSL address some of this).

Question

What are the weaknesses of Cognitive Task Analysis?

Answer

(1) Incredibly time-consuming to perform — requires interviewing multiple experts; (2) Risks deemphasizing context (focuses too narrowly on internal mental processes, misses physical/social factors); (3) Not suited to novice users who lack well-formed mental models of their work.

Question

What is Distributed Cognition?

Answer

A theory that extends the unit of cognitive analysis from the individual mind to the entire system of people, artifacts, and their relationships. The system as a whole can perceive, remember, and decide in ways no individual part could do alone. Classic example: a cockpit's memory system (booklet + speed card + speed bugs) that no single pilot or artifact could replicate alone.

Question

How does the cockpit example illustrate distributed cognition? (Hutchins)

Answer

In a landing aircraft: the booklet of speed cards = long-term memory of the cockpit; the pulled speed card = short-term memory; the speed bugs on the speedometer = working memory. No single pilot or artifact alone 'remembers the speeds' — the entire system does. Cognition is distributed across people and artifacts.

Question

What is cognitive load and how does it relate to distributed cognition?

Answer

Cognitive load = the amount of mental processing capacity being used at once. Working memory is limited. Distributed cognition argues artifacts can offload cognitive tasks, distributing the load across the system. Example: a GPS reduces the cognitive load of navigation while driving; cruise control offloads speed tracking.

Question

What is Situated Action?

Answer

A theory arguing that human behavior is not driven by pre-formed plans and goals, but is improvised in response to immediate context. Goals are constructed retroactively to interpret past actions, not formed in advance. Contrasts with GOMS and activity theory, which assume stable goals drive behavior. Associated with Suchman.

Question

What is Activity Theory?

Answer

A theory that structures human activity around motives, goals, and operations. Artifacts are mediating tools. Key difference from distributed cognition: activity theory regards humans and artifacts as fundamentally different (humans have consciousness; artifacts don't). Key difference from situated action: activity is shaped by pre-formed motives and goals, not just immediate context.

Question

How do Activity Theory, Distributed Cognition, and Situated Action differ on goals?

Answer

Activity Theory and Distributed Cognition: both are driven by goals/motives — goals exist before and shape action. Situated Action: goals are de-emphasized; action is improvised from context, and goals are constructed retroactively to explain what already happened. (Per Nardi's comparison paper)

Question

How do Activity Theory and Distributed Cognition differ on artifacts?

Answer

Activity Theory: humans and artifacts are fundamentally DIFFERENT — humans have consciousness, artifacts do not. Distributed Cognition: artifacts can serve cognitive roles and should be considered conceptually equivalent to humans in the system. (Per Nardi's comparison paper)

Question

What is the unit of analysis in Distributed Cognition?

Answer

The entire socio-technical system — the combination of people, artifacts, and the relationships among them — rather than the individual mind. This is the central conceptual move of distributed cognition: zooming out from one person's brain to the whole system.

Question

Do artifacts have politics? (Winner)

Answer

Yes — according to Langdon Winner's 1980 essay. Artifacts can embody forms of power and authority either: (1) as inherently political technologies (require certain political structures by design, e.g., nuclear power requires top-down control) or (2) as technical arrangements used to achieve political goals (e.g., Robert Moses's intentionally low bridges).

Question

What is the Robert Moses bridge example?

Answer

Robert Moses, NYC city planner, built Long Island parkway overpasses too low for buses (9 ft clearance). 12-ft buses couldn't pass through. This excluded poor and minority populations (who relied on public transit) from his parks — without any explicit rule. The design itself encoded racial and class discrimination.

Question

What are the two types of political artifacts per Winner?

Answer

(1) Inherently political technologies: require certain political structures by design (nuclear power = centralized authority; solar power = decentralized). (2) Technical arrangements as forms of order: neutral technologies used to achieve political goals (Moses's bridges; McCormick's molding machines used to bust up a union).

Question

What is Value-Sensitive Design (VSD)?

Answer

A design methodology from the University of Washington that accounts for human values in a principled, systematic way throughout the design process. Goes beyond usability to ask: is this system consistent with users' values? Distinguishes usability (functional effectiveness) from values with moral import (privacy, fairness, autonomy).

Question

What are the three types of investigations in VSD?

Answer

(1) Conceptual investigations: thought experiments about what values are at stake and who the stakeholders are; (2) Empirical investigations: studying real users to understand how they experience and prioritize values; (3) Technical investigations: examining the system itself to assess whether it supports or undermines the target values.

Question

What is the difference between direct and indirect stakeholders?

Answer

Direct stakeholders: people who interact directly with the system (the primary users). Indirect stakeholders: people who don't use the system but are affected by it. Example: in a hospital records system, doctors/nurses = direct; patients = indirect (their data is stored but they never log in). VSD requires designing for both.

Question

What are the Three Goals of HCI?

Answer

(1) Design for usability — make tasks easier (e.g., GPS with fewest taps); (2) Design for research — use the interface to test hypotheses (e.g., speedometer visualization to study speed perception); (3) Design for change — shape behavior via values (e.g., seatbelt beep that serves no usability purpose but promotes safety).

Question

What is Privacy by Design?

Answer

A VSD application that builds privacy as a value into the system's architecture from the start, rather than as an afterthought. Related to the EU's 'Right to be Forgotten' law, which requires search engines to allow people to remove personal information — creating tension between privacy values and free speech values.

Question

What is 'designing for change'?

Answer

Intentionally using interface design to alter user behavior or societal values. Examples: Facebook's Like button (only allows positive interactions to reduce cyberbullying), car seatbelt beeps (discourages unsafe behavior). Can conflict with designing for usability. Can be used for positive or negative ends (e.g., Moses's bridges).

Question

What are the three models of the user's role in a system?

Answer

(1) Processor model: user as input-output machine (basis for GOMS); (2) Predictor model: user as active reasoner who predicts outcomes (basis for CTA); (3) Participant model: user as a member of a larger social/physical context (basis for Distributed Cognition, Activity Theory, Situated Action).

Question

Which task analysis method corresponds to the Processor Model?

Answer

GOMS (and KLM/CPM-GOMS/NGOMSL). These treat the user as an input-output machine, focusing on explicit actions without modeling internal reasoning. Best for well-defined, expert-level, predictable tasks.

Question

Which task analysis method corresponds to the Predictor Model?

Answer

Cognitive Task Analysis (CTA) / Hierarchical Task Analysis. These try to get inside the user's head — modeling memory, attention, and decision-making. Best for capturing expert mental processes and understanding complex or error-prone tasks.

Question

What theory corresponds to the Participant Model?

Answer

Distributed Cognition, Situated Action, and Activity Theory — all of which view the user as embedded in a broader context (social, physical, cultural, artifact-rich). Best for understanding how context shapes interaction.

Question

What is Evaluation in HCI?

Answer

The process of testing a design — with or without real users — to gather feedback and assess usability. Types: qualitative (early, formative), empirical (late, summative), and predictive (no users, rapid). All types feed back into the design life cycle. The goal shifts from 'what can we improve?' early on to 'can we prove it's better?' at the end.

Question

What is qualitative evaluation?

Answer

Evaluation focused on understanding users' experiences and perceptions through methods like interviews, think-aloud protocols, and focus groups. Answers 'What do you like? What confused you?' Best used early in the design process. Informs ongoing design decisions but cannot prove quantitative differences.

Question

What is empirical (quantitative) evaluation?

Answer

Controlled experiments that produce numeric results — task completion time, error rates, etc. Used to make objective, generalizable comparisons between designs. Best used late in design. It is the ONLY evaluation type that can identify provable advantages. Example: A/B testing, timed usability studies.

Question

What is predictive evaluation?

Answer

Evaluation done WITHOUT real users, where the evaluator simulates or predicts user behavior using models or heuristics. More efficient than user studies; useful for rapid feedback. Should only supplement — not replace — user evaluation. Methods include cognitive walkthroughs and heuristic evaluation.

Question

What is a Cognitive Walkthrough?

Answer

A predictive evaluation method where an evaluator steps through a task, simulating what a novice user sees, thinks, and does at each step. At each action, the evaluator asks: (1) Will the user know what to do? (2) Can they identify the correct action? (3) Will the feedback confirm success? Evaluated through the lens of the gulfs of execution and evaluation.

Question

What are the two phases of a Cognitive Walkthrough?

Answer

(1) Preparation phase: select representative tasks, describe the initial interface state, define the correct action sequence, describe anticipated users and their initial goals; (2) Evaluation phase: step through each action, analyzing what goals the user should have, whether the interface will induce the correct action, and how the user's goals change after each action.

Question

What is Heuristic Evaluation?

Answer

A predictive evaluation where 3–5 usability experts independently inspect an interface against a set of known heuristics and identify violations. Evaluators work alone, then findings are aggregated. Cheap, requires no users, no advance planning. Nielsen & Molich found individual evaluators find 20–51% of problems; aggregates of 3–5 work well.

Question

What are Nielsen's 9 Usability Heuristics?

Answer

(1) Simple and natural dialogue; (2) Speak the user's language; (3) Minimize user memory load; (4) Be consistent; (5) Provide feedback; (6) Provide clearly marked exits; (7) Provide shortcuts; (8) Give good error messages; (9) Prevent errors. Used as the benchmark for heuristic evaluation.

Question

How many evaluators does Nielsen recommend for Heuristic Evaluation, and why?

Answer

3–5 evaluators. Individual evaluators find only 20–51% of usability problems. Aggregating 3–5 evaluators captures the 'collected wisdom' of the group and dramatically improves coverage. Adding more than 5 evaluators yields diminishing returns — extra resources are better spent on alternative evaluation methods.
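The diminishing-returns argument can be made concrete with a simple aggregation model. This sketch assumes each evaluator independently finds a fixed fraction of the problems; the 31% rate is a commonly quoted average from Nielsen's later work, used here only for illustration.

```python
# Illustrative model: expected fraction of usability problems found
# when aggregating n independent evaluators, each finding a fixed
# fraction (find_rate) of the problems on their own.
def coverage(find_rate: float, n_evaluators: int) -> float:
    return 1 - (1 - find_rate) ** n_evaluators

for n in (1, 3, 5, 10):
    print(n, round(coverage(0.31, n), 2))
```

With a 31% individual rate, coverage climbs steeply from one to five evaluators and then flattens, which is the "diminishing returns beyond 5" pattern the card describes.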

Question

What is the Think-Aloud Protocol?

Answer

An evaluation method where users verbalize their thoughts, goals, and confusion while using an interface. Gives evaluators direct insight into the user's mental model and reasoning. Used in qualitative evaluation to understand not just what users do but why — useful for diagnosing the root cause of errors.

Question

What is the Gulf of Execution?

Answer

The gap between what a user wants to do and the actions the interface makes available. A large gulf = the interface doesn't make it obvious what actions are possible or how to perform them, forcing the user to guess. In cognitive walkthroughs, the evaluator asks: 'Will the user know what to do?' to assess the gulf of execution.

Question

What is the Gulf of Evaluation?

Answer

The gap between the system's output after an action and the user's ability to interpret whether their goal was achieved. A large gulf = feedback is absent, unclear, or misleading. In cognitive walkthroughs, the evaluator asks: 'Will the feedback tell the user whether they succeeded?' to assess the gulf of evaluation.

Question

What is Formative Evaluation?

Answer

Evaluation conducted DURING the design process with the intention of improving the interface in the next iteration. Most evaluation in the design life cycle is formative. Corresponds to qualitative and predictive evaluation. Purpose: 'How can we make this better?'

Question

What is Summative Evaluation?

Answer

Evaluation conducted at the END of the design process to make a conclusive statement about performance. Corresponds to empirical evaluation. Purpose: 'Can we prove this design is better than the previous one?' Example: 'Our redesign reduced task time by 30%.' The course notes that ideally all evaluation is formative, since design never truly ends.

Question

What is A/B Testing?

Answer

A live evaluation method that presents two interface variants (A and B) to real users and measures which performs better on a defined metric. Statistically equivalent to a t-test. Practical when deployment costs are low (e.g., web pages). Example: comparing two checkout page layouts to see which results in more completed purchases.
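The t-test connection can be sketched with stdlib Python. This is a minimal illustration, assuming we logged task-completion times (seconds) for the two checkout variants; the data values are invented, and a full analysis would get an exact p-value from a t distribution (e.g. `scipy.stats.ttest_ind`).

```python
# Sketch of the statistics behind an A/B test: Welch's t statistic
# for two independent samples of task-completion times.
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """(mean difference) / (standard error of the difference)."""
    se2 = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se2 ** 0.5

checkout_a = [31.0, 29.5, 33.2, 30.8, 32.1, 28.9]  # variant A times
checkout_b = [25.4, 27.1, 24.8, 26.5, 25.9, 26.2]  # variant B times

# A |t| well above ~2 suggests the difference is unlikely to be noise.
print(round(welch_t(checkout_a, checkout_b), 2))
```

Here variant B is faster, and the large t statistic is what lets the team claim a provable advantage rather than a qualitative impression.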

Question

What is Live Prototyping?

Answer

Building and deploying an actual working interface instead of a traditional prototype, when construction is as easy as prototyping (e.g., drag-and-drop web tools like Optimizely). Appropriate when cost of failure is low and real user data is valuable. NOT appropriate for high-stakes systems (healthcare, aviation) or expensive hardware platforms.

Question

What are the 5 usability metrics used in empirical evaluation?

Answer

(1) Efficiency — time/actions to complete a task; (2) Accuracy — number of errors committed; (3) Learnability — how quickly a user reaches a defined level of expertise; (4) Memorability — how well users retain how to use the interface over time; (5) Satisfaction — user enjoyment and perceived cognitive load.

Question

What is reliability in evaluation?

Answer

Whether an assessment measure produces consistent results over repeated trials. Example: if you ask Amanda the time three times and she always says '2:30,' she is reliable. Without reliability, conclusions from evaluation are random and not meaningful.

Question

What is validity in evaluation?

Answer

Whether an assessment measure accurately captures what it's supposed to measure. An assessment can be reliable but not valid. Example: Amanda always says '2:30' (reliable) but the actual time is 1:30 (not valid). In HCI, evaluation measures must be both reliable and valid.

Question

What is Agile Development?

Answer

A software development methodology emphasizing iterative short cycles (sprints), continuous delivery, rapid feedback, and adaptability. Natural fit with HCI's feedback cycle philosophy. Risk: it learns from real failures with real users — inappropriate for high-stakes domains. Chamberlain, Sharp & Maiden found significant overlap with User-Centered Design.

Question

What are the similarities between Agile Development and User-Centered Design?

Answer

Per Chamberlain, Sharp & Maiden: both rely on iterative development building on feedback from prior cycles; both emphasize heavy user involvement; both emphasize team cohesion. Key conflict: UCD emphasizes thorough documentation and pre-design research; Agile minimizes both.

Question

When is Agile Development appropriate? (Boehm & Turner)

Answer

Two conditions: (1) Low criticality — the cost of bugs/failures must be low (healthcare and finance = bad candidates; smartphone games and social media = good candidates); (2) Frequently changing requirements — Agile excels when requirements shift often, not for stable products like thermostats.

Question

What is the Parallel Track Method?

Answer

A strategy for integrating HCI into Agile: the design/research team works one SPRINT AHEAD of the development team. The HCI team does user research, prototyping, and low-fidelity evaluation in the current sprint, then hands findings to developers for their next sprint. Preserves UCD's research-first approach within an Agile timeline.

Question

What is the Design Life Cycle?

Answer

The iterative four-phase process of HCI design: (1) Needfinding — understand users and tasks; (2) Design alternatives — generate potential solutions; (3) Prototyping — build solutions at increasing fidelity; (4) Evaluation — test and gather feedback. Feeds back into the next cycle. Never truly ends.

Question

What is Needfinding?

Answer

The first phase of the design life cycle. The goal is to deeply understand users — who they are, what tasks they do, what their goals and pain points are, and what context they work in. Methods: interviews, direct observation, surveys, contextual inquiry. Produces the user model that informs all subsequent design decisions.

Question

What is the paper 'How a Cockpit Remembers Its Speeds' about?

Answer

A 1995 paper by Edwin Hutchins arguing that cognition in aviation is distributed across the cockpit system — not located in any individual pilot's brain. The booklet of speed cards, the selected card, and the speed bugs on the speedometer together form the system's long-term, short-term, and working memory for managing aircraft speed during descent.

Question

What did Nielsen & Molich find about individual heuristic evaluators?

Answer

Individual evaluators found only 20–51% of usability problems across four experiments. Even usability experts perform imperfectly. However, aggregating 3–5 evaluators dramatically improves coverage because different evaluators catch different problems. The collected wisdom of a group far exceeds any individual.

Question

What is the difference between heuristic evaluation and cognitive walkthrough?

Answer

Heuristic evaluation: experts check an interface against a list of usability heuristics — good for finding a broad range of violations. Cognitive walkthrough: evaluator steps through specific tasks simulating novice user cognition — focuses specifically on learnability and 'walk-up-and-use' ease. Both are predictive (no real users needed).

Question

What is the McCormick molding machine example (Winner)?

Answer

At Cyrus McCormick's Chicago factory in the 1880s, pneumatic molding machines were introduced that actually produced inferior castings at higher cost. Their real purpose: to replace unionized skilled workers with unskilled labor, busting the union. The technology served a political goal despite being technically inferior. Example of 'technical arrangements as forms of order.'

Question

What is the Facebook Like button example of 'designing for change'?

Answer

Facebook's Like button was intentionally designed to support ONLY positive interactions — despite the usability argument for a Dislike button. This design choice was meant to reduce cyberbullying and negativity. The later emoji reactions (Love, Haha, Wow, Sad, Angry) were also designed to maintain a positive/sympathetic tone even for negative emotions.

Question

What is the 'right to be forgotten' and how does it relate to VSD?

Answer

A law in the EU allowing individuals to request removal of personal information from online search results. A VSD example: privacy is a value the EU has designed into law. However, it conflicts with another value — free speech — creating tension. It also shows how values differ across cultures, requiring designers to sometimes build different systems for different regions.

Question

What is the difference between Distributed Cognition and Situated Action regarding persistent structures?

Answer

Distributed Cognition and Activity Theory: persistent structures (artifacts, institutions, cultural practices) are central to analysis — they shape activity across time. Situated Action: persistent structures present tension; the focus is on the immediate, unique situation. Nardi's paper identifies this as a key difference among the three theories.

Question

In the design life cycle, when should you use qualitative vs. empirical evaluation?

Answer

Qualitative evaluation: early in the cycle, when you need to understand user experience and improve iteratively. Empirical evaluation: late in the cycle, when you need to measure and prove improvement. Predictive evaluation: throughout, as a supplement when user access is limited. In practice: qualitative → empirical as fidelity increases.

Question

What makes a design a good candidate for Agile vs. traditional HCI approaches?

Answer

Good for Agile: low criticality (failure won't cause serious harm), frequently changing requirements, software-only deployment (easy to update), small team. Poor for Agile: high stakes (healthcare, aviation), hardware-dependent (can't push software updates to fix physical devices), stable requirements, large teams requiring strong order and documentation.

Question

What does Nardi say is the key difference between Activity Theory and Distributed Cognition?

Answer

Their treatment of the symmetry between people and artifacts. Activity Theory: humans and artifacts are FUNDAMENTALLY DIFFERENT — humans have consciousness and artifacts do not. Distributed Cognition: artifacts can serve cognitive roles and should be treated as conceptually equivalent to humans within the cognitive system.