Get Smart with Salesforce Einstein
Think of a bank using AI to predict whether an applicant will repay a loan. If the system predicts that the applicant will be able to repay the loan but they don’t, it’s a false positive, or type 1 error. If the system predicts the applicant won’t be able to repay the loan but they do, that’s a false negative, or type 2 error. Banks want to grant loans to people they are confident can repay them. To minimize risk, their model is inclined toward type 2 errors. Even so, false negatives harm applicants the system incorrectly judges as unable to repay.
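The loan example above maps directly onto a confusion matrix. Here is a minimal sketch, with invented outcomes purely for illustration, that counts type 1 and type 2 errors for a set of hypothetical applicants (1 means "will repay," 0 means "won't repay"):

```python
# Hypothetical data for illustration only.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]  # what each applicant actually did
predicted = [1, 1, 0, 1, 0, 1, 0, 0]  # what the model predicted

# False positive (type 1): predicted repay, applicant defaulted.
# False negative (type 2): predicted default, applicant would have repaid.
fp = sum(1 for a, p in zip(actual, predicted) if p == 1 and a == 0)
fn = sum(1 for a, p in zip(actual, predicted) if p == 0 and a == 1)
tp = sum(1 for a, p in zip(actual, predicted) if p == 1 and a == 1)
tn = sum(1 for a, p in zip(actual, predicted) if p == 0 and a == 0)

print(f"type 1 (false positives): {fp}")  # risky loans the bank would grant
print(f"type 2 (false negatives): {fn}")  # creditworthy applicants denied
```

A bank that tunes its threshold to minimize type 1 errors will usually see type 2 errors rise, which is exactly the trade-off that harms applicants the system wrongly judges as unable to repay.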

Association Bias

When data are labeled according to stereotypes, the result is association bias. Search most online retailers for “toys for girls” and you get an endless assortment of cooking toys, dolls, princesses, and pink. Search “toys for boys,” and you see superhero action figures, construction sets, and video games.

Confirmation Bias 

Confirmation bias labels data based on preconceived ideas. The recommendations you see when you shop online reflect your purchasing habits, but the data influencing those purchases already reflect what people see and choose to buy in the first place. You can see how recommendation systems reinforce stereotypes. If superheroes don’t appear in a website’s “toys for girls” section, a shopper is unlikely to know they’re elsewhere on the site, much less purchase them.

Automation Bias 

Automation bias imposes a system’s values on others. Take, for instance, a beauty contest judged by AI in 2016. The goal was to pick the most beautiful women with some notion of objectivity. But the AI in question judged by Western beauty standards that emphasize whiteness. In the end, the vast majority of the winners were white.

Automation bias isn't limited to AI. Take the history of color photography. Starting in the mid-1950s, Kodak provided photo labs that developed its film with an image of a fair-skinned employee named Shirley Page, used to calibrate skin tones, shadows, and light. While different models were used over time, the images became known as "Shirley cards." Shirley's skin tone, regardless of who she was (and she was initially always white), was considered standard. As Lorna Roth, a media professor at Canada's Concordia University, told NPR, when the cards were first created, "the people who were buying cameras were mostly Caucasian people. And so I guess they didn't see the need for the market to expand to a broader range of skin tones." In the 1970s, Kodak started testing on a variety of skin tones and made multiracial Shirley cards.

Societal Bias 

Societal bias reproduces the results of past prejudice toward historically marginalized groups. Consider redlining. In the 1930s, a federal housing policy color-coded certain neighborhoods in terms of desirability. The ones marked in red were considered hazardous. Banks often denied minority residents of these red-marked neighborhoods access to low-cost home lending. To this day, redlining has influenced the racial and economic makeup of certain zip codes, so that a zip code can serve as a proxy for race. If you include zip codes as a data point in your model, depending on the use case you could inadvertently be incorporating race as a factor in your algorithm’s decision-making. Remember that it is also illegal in the US to use protected categories like age, race, or gender in making many financial decisions.
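The proxy effect is easy to demonstrate. Below is a toy sketch with entirely synthetic data (the zip codes and group labels are invented): even after the protected attribute is dropped from the dataset, a "model" that looks only at zip code can reconstruct it.

```python
# Synthetic illustration only: all zip codes and labels are invented.
# Because redlining shaped who lives where, zip code alone can recover
# the protected attribute that was deliberately removed from the data.
redlined = {"60621", "60636"}  # hypothetical historically redlined zips

# (zip_code, member_of_marginalized_group) -- race column already removed
applicants = [
    ("60601", 0), ("60601", 0), ("60621", 1),
    ("60636", 1), ("60601", 0), ("60621", 1),
]

def proxy_guess(zip_code):
    """A trivial 'model' that sees only the zip code."""
    return 1 if zip_code in redlined else 0

accuracy = sum(proxy_guess(z) == g for z, g in applicants) / len(applicants)
print(accuracy)  # in this toy data, zip code fully recovers the attribute
```

This is why simply deleting a protected column is not enough: any feature correlated with it, like zip code, can smuggle the same information back into the model's decisions.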

Survival or Survivorship Bias

Sometimes, an algorithm focuses on the results of those who were selected, or who survived a certain process, at the expense of those who were excluded. Let’s look at hiring practices. Imagine that you’re the hiring director of a company, and you want to figure out whether you should recruit from a specific university. You look at current employees hired from that university. But what about the candidates from that university who weren’t hired, or who were hired and subsequently let go? You see the success of only those who “survived.”

Interaction Bias

Humans create interaction bias when they interact with, or deliberately try to influence, AI systems and produce biased results. An example is people intentionally teaching chatbots offensive language.