Get Smart with Salesforce Einstein
Think of a bank using AI to predict whether an applicant will repay a loan. If the system predicts that the applicant will be able to repay the loan but they don’t, it’s a false positive, or type 1 error. If the system predicts the applicant won’t be able to repay the loan but they do, that’s a false negative, or type 2 error. Banks want to grant loans to people they are confident can repay them. To minimize risk, their model is inclined toward type 2 errors. Even so, false negatives harm applicants the system incorrectly judges as unable to repay.
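The two error types above can be counted directly from a model's predictions. The sketch below is illustrative (not from this module), using hypothetical labels where 1 means "will repay" and 0 means "won't repay":

```python
# Counting type 1 (false positive) and type 2 (false negative) errors
# for a hypothetical loan-repayment classifier.

def error_counts(actual, predicted):
    """Return (false_positives, false_negatives) for binary labels."""
    fp = sum(1 for a, p in zip(actual, predicted) if p == 1 and a == 0)  # type 1
    fn = sum(1 for a, p in zip(actual, predicted) if p == 0 and a == 1)  # type 2
    return fp, fn

# Hypothetical outcomes for six applicants.
actual = [1, 0, 1, 1, 0, 1]     # whether each applicant actually repaid
predicted = [1, 1, 0, 1, 0, 0]  # what the model predicted

fp, fn = error_counts(actual, predicted)
print(fp, fn)  # 1 false positive, 2 false negatives
```

Here the two false negatives are applicants who would have repaid but were denied, which is exactly the harm the paragraph above describes.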
Association Bias
When data are labeled according to stereotypes, it results in association bias. Search most online retailers for “toys for girls” and you get an endless assortment of cooking toys, dolls, princesses, and pink. Search “toys for boys,” and you see superhero action figures, construction sets, and video games.
Confirmation Bias
Confirmation bias labels data based on preconceived ideas. The recommendations you see when you shop online reflect your purchasing habits, but the data influencing those purchases already reflect what people see and choose to buy in the first place. You can see how recommendation systems reinforce stereotypes. If superheroes don’t appear on a website’s “toys for girls” section, a shopper is unlikely to know they’re elsewhere on the site, much less purchase them.
Automation Bias
Automation bias imposes a system’s values on others. Take, for instance, a beauty contest judged by AI in 2016. The goal was to declare the most beautiful women with some notion of objectivity. But the AI in question judged by Western beauty standards that emphasize whiteness. In the end, the vast majority of the winners were white.
Automation bias isn't limited to AI. Take the history of color photography. Starting in the mid-1950s, Kodak supplied the photo labs that developed its film with an image of a fair-skinned employee named Shirley Page, used to calibrate skin tones, shadows, and light. While different models were used over time, the images became known as "Shirley cards." Shirley's skin tone, regardless of who she was (and she was initially always white), was considered standard. As Lorna Roth, a media professor at Canada's Concordia University, told NPR, when the cards were first created, "the people who were buying cameras were mostly Caucasian people. And so I guess they didn't see the need for the market to expand to a broader range of skin tones." In the 1970s, Kodak started testing on a variety of skin tones and made multiracial Shirley cards.
Societal Bias
Societal bias reproduces the results of past prejudice toward historically marginalized groups. Consider redlining. In the 1930s, a federal housing policy color-coded certain neighborhoods in terms of desirability. The ones marked in red were considered hazardous. Banks often denied low-cost home loans to minority residents of these red-marked neighborhoods. To this day, redlining has influenced the racial and economic makeup of certain zip codes, so that zip codes can be a proxy for race. If you include zip codes as a data point in your model, depending on the use case, you could inadvertently be incorporating race as a factor in your algorithm’s decision-making. Remember that it is also illegal in the US to use protected categories like age, race, or gender in making many financial decisions.
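One way to spot a proxy feature like this is to check how well it predicts the protected category on its own. The sketch below uses deliberately segregated synthetic records (hypothetical zip codes and group labels, not real data): if guessing the majority group per zip code is far more accurate than chance, the zip code is acting as a stand-in for that category.

```python
# Hedged sketch: measuring how strongly a feature (zip code) proxies
# a protected attribute, using synthetic records of (zip, group) pairs.
from collections import Counter, defaultdict

def proxy_accuracy(records):
    """Accuracy of guessing the group from zip alone, via each zip's majority group."""
    by_zip = defaultdict(list)
    for zip_code, group in records:
        by_zip[zip_code].append(group)
    # Count how many records the per-zip majority guess gets right.
    correct = sum(Counter(groups).most_common(1)[0][1] for groups in by_zip.values())
    return correct / len(records)

# Synthetic, deliberately segregated data: each zip is dominated by one group.
records = ([("30301", "A")] * 9 + [("30301", "B")] * 1 +
           [("30302", "B")] * 9 + [("30302", "A")] * 1)
print(proxy_accuracy(records))  # 0.9 -- zip code nearly determines the group
```

An accuracy near 0.5 (chance, with two balanced groups) would suggest the feature carries little information about the protected category; a value near 1.0 is a red flag that including it can smuggle that category into the model.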
Survival or Survivorship Bias
Sometimes, an algorithm focuses on the results of those who were selected, or who survived a certain process, at the expense of those who were excluded. Let’s look at hiring practices. Imagine that you’re the hiring director of a company, and you want to figure out whether you should recruit from a specific university. You look at current employees hired from such-and-such university. But what about the candidates from that university who weren’t hired, or who were hired and subsequently let go? You see the success of only those who “survived.”
Interaction Bias
Humans create interaction bias when they interact with, or intentionally try to influence, AI systems in ways that produce biased results. An example of this is when people intentionally try to teach chatbots bad language.

