Design Engineering
Showcase 2021

Amazon

Tags
Operations
Supply Chain
Big Data
Management
Logistics

Project Details

Student
Molly Morrell
Team
Logistics
Supervisor
Dr Sam Cooper
Role
Operations Intern
Sector
Retail and Consumer Goods

Interning with Amazon Logistics furthered my understanding of how large-scale supply chains are managed from the ground up. In an industry where the key offerings are intangible measures of efficiency and customer experience, I engaged with the frameworks and KPIs that companies leverage to sustain success. During my placement, I applied Lean Six Sigma principles to minimise wasted time and labour, culminating in projects such as the DA Performance Scorecards. I broadened my technical skillset with respect to programming in Python 3, data handling with MySQL, dashboarding in PowerBI/QuickSight, and AWS tooling.


Demonstration of Design Engineering Thinking and Skills

Project 1 centred on promoting the use of a broader range of AMZL data in handling concessions, to better inform and standardise the managerial approach to liaising with Amazon's Delivery Service Partners (DSPs) and establishing remedial actions. Through this, I deepened my working knowledge of PowerBI for descriptive analysis of concession data and for gauging correlation/causation between DNR variables, alongside big data processing in Python. I wrote several Python crawlers to access the secure sites where shipment data was displayed and extract it into a readable format, before learning about AWS data warehousing and the platforms used to read and establish direct connections to the required tables. I also developed a basic understanding of AWS' data tools (namely Redshift, SageMaker, QuickSight, Firehose and EC2) and built on a foundational understanding of data querying in SQL.
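As a rough illustration of the crawler approach, the sketch below pulls an HTML shipment table from a hypothetical internal page and writes it to CSV. The URL, session cookie and table id are placeholders rather than the real AMZL sites, which cannot be reproduced here.

```python
# Minimal sketch of the kind of crawler used to pull shipment data into a
# readable format. The URL, session cookie and table id are placeholders;
# the real internal sites, authentication and page structure are not shown here.
import csv

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://internal.example.com/shipments"   # hypothetical endpoint
SESSION_COOKIE = {"session": "REDACTED"}              # auth handled separately in practice


def fetch_shipment_rows(station_code: str) -> list[list[str]]:
    """Download the shipment table for one delivery station and return its rows."""
    resp = requests.get(BASE_URL, params={"station": station_code},
                        cookies=SESSION_COOKIE, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    table = soup.find("table", {"id": "shipment-table"})  # placeholder table id
    rows = []
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
        if cells:
            rows.append(cells)
    return rows


def save_csv(rows: list[list[str]], path: str) -> None:
    """Write the scraped rows out as a CSV so they can be analysed in PowerBI."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(rows)


if __name__ == "__main__":
    save_csv(fetch_shipment_rows("DXX1"), "shipments_DXX1.csv")
```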

I sought to contextualise DNR/concessions by building an understanding of the delivery workflow, speaking to the relevant stakeholders and analysing relevant concession data to establish related metrics and their impact on DNR. Even a surface-level understanding of the delivery workflow made it easier to identify flaws in the current approach to handling concessions, and the areas in which DAs struggle to meet AMZL's expectations. Consulting DSPs about the nature of the data they receive, and how they think Amazon translates DA performance data into actions, indicated a breakdown in process logic. Proposed frameworks for weekly DA scorecards (to be used in-station during concessions meetings and for DA development more generally) were received positively by several DSPs, with members of Ops (both on-site and from other Delivery Stations), LMDX and DA Training/DSP Support invested in the project's progress. Constituent metrics and weightings were reviewed by numerous parties and amended as appropriate, with the LastMile technology team guiding the process for justifying different metrics' inclusion and validating the final scorecard prototype.

Effective business communications were upheld in person, on calls and via instant messenger (Chime) throughout the internship. I became increasingly confident reaching out to AMZL employees over message or email, regardless of seniority, with queries relating to their work and, where relevant, aspects of my project. Scheduled meetings with these individuals were approached with an agenda to guide conversations and avoid wasting people's time, with follow-ups often carried out over Chime for flexibility. I routinely joined wider-circle AMZL calls such as DSP Roundtables, Monthly Tech SME Calls and Intern Spotlight presentations to learn about focus points and developments in the broader community. I developed an understanding of different management styles by working on-site and built meaningful relationships with co-workers.

Figures: PHR/CC tracking spreadsheet; PerfectMile use insight dashboard snippet; heatmap output from the eye-tracking program; process flowchart for DAPS implementation.

Role and Contributions

Project 1 (Main Project – DNR Reduction and Process Optimisation)

Project Overview

A subset of Amazon's concessions, Delivered Not Received Controllable (DNR-C) refers to shipments marked as delivered by the Delivery Associate (DA) and verified by operational scans, but which the customer claims never materialised. Approaches to reducing DNR-C DPMO vary considerably by delivery station, and the nature of data sharing with Delivery Service Partners (DSPs) varies with them. The objectives of this project were therefore to review the DNR process end-to-end, optimise the workflow and align practice across UK stations.
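For reference, DPMO here follows the standard Six Sigma definition of defects per million opportunities. Assuming each delivered shipment counts as one opportunity, the calculation looks like the sketch below (the figures are illustrative, not station data).

```python
# Illustrative DPMO calculation, assuming the standard Six Sigma definition in
# which each delivered shipment counts as one opportunity. The numbers are
# made up for the example, not real station data.
def dnr_c_dpmo(dnr_c_claims: int, shipments_delivered: int) -> float:
    """Defects per million opportunities for DNR-C concessions."""
    return dnr_c_claims / shipments_delivered * 1_000_000


print(dnr_c_dpmo(dnr_c_claims=42, shipments_delivered=150_000))  # -> 280.0
```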

Problem Context

After interacting with my assigned DSPs during concession-review meetings, investigating DA training resources and consulting a variety of project stakeholders, it became apparent that the main issue with DNR-C was in fact the metric being tracked to gauge DA liability. Several situational factors can influence a customer's propensity to claim a concession, so it is unjust to take a high DNR-C DPMO as an indication of a driver's adherence to the delivery workflow and penalise them for perceived non-conformance. Further issues lay in the lack of a defined workflow for processing concessions in-station and for liaising with DSPs on their associates' performance. Because of this shortfall, remedial actions taken by both DSP and Amazon managers were subject to personal biases and lenience. As third-party contractors, DSPs experience immense pressure to satisfy both AMZL and their employees (DAs). Given that pressure on their DAs is constantly rising in line with metric targets, DSPs will require intensive support to coach their employees if driver attrition is to be kept in check.

Proposal & Benefits

The main deliverable of this project was a two-part DA-level scorecard (termed DAPS) consisting of the aggregated metrics DA Compliance and Station Accountability. Constituent metrics were selected following an in-depth investigation of the factors that influence the DNR variable and statistical analysis of the power/significance of each metric in predicting DNR. For DA Compliance, metric weighting was determined as the product of 'Degree (DA) Controllable' and 'Business Impact', with both measures ranging from 1 to 5. Similarly, the constituent metrics of Station Accountability were weighted by the product of 'Degree (Station) Controllable' and the metric's predictor strength, rounded to the nearest quintile. The products were then adjusted to reflect whether the metric would positively or negatively affect the Compliance/Accountability score and scaled to output a percentage value. Multiple parties were consulted in determining the metric weightings, and this is reflected in the final scorecards.
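The sketch below illustrates the shape of that calculation for the DA Compliance half of the scorecard. The metric names, scores and signs are invented for the example; only the structure (weight = controllability × impact, sign-adjusted and scaled to a percentage) reflects the approach described above.

```python
# Illustrative weighting scheme for the DA Compliance half of the scorecard.
# Metric names, scores and signs are invented for the example; only the shape
# of the calculation (weight = controllability x impact, sign-adjusted and
# scaled to a percentage) follows the approach described in the text.

# (degree DA-controllable 1-5, business impact 1-5, +1 helps / -1 hurts the score)
metrics = {
    "delivered_to_safe_place": (4, 3, +1),
    "photo_on_delivery_rate":  (5, 4, +1),
    "contact_compliance":      (5, 3, +1),
    "dnr_c_dpmo":              (2, 5, -1),
}


def compliance_score(observed: dict[str, float]) -> float:
    """Combine 0-1 metric values into a single signed, weighted percentage."""
    weights = {name: c * i * sign for name, (c, i, sign) in metrics.items()}
    total = sum(abs(w) for w in weights.values())
    raw = sum(weights[name] * observed[name] for name in metrics)
    return 100 * raw / total


example = {
    "delivered_to_safe_place": 0.92,
    "photo_on_delivery_rate": 0.88,
    "contact_compliance": 0.95,
    "dnr_c_dpmo": 0.10,   # already normalised so that higher = worse
}
print(round(compliance_score(example), 1))  # -> 73.5 with these invented figures
```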

Implementation

A process flowchart of how this two-part scorecard would be used in the Operations environment is shown below. In summary, full-detail versions of the scorecards (by DSP) are distributed to the parties that will make direct use of them, along with the DSPs themselves. These parties then use the scorecard to identify concession instances where AMZL was likely at fault and deep-dive those specifically, rather than wasting time investigating non-controllable DA-misconduct DNRs. Another responsibility of the Operations team would be to consult DSPs regarding underperforming DAs and ensure adherence to standard procedure.

The full-detail scorecard is also shared with the DSP teams, who can thus review their drivers' performance across a range of areas, contrast scores with fleets at other sites and monitor DAs who need unofficial coaching. The objective nature of performance meetings with AMZL management reduces the sense of unfair pressure and unattainable standards, with DSPs instead feeling supported in their roles and having a more thorough understanding of AMZL's expectations.

Project 2 – PerfectMile User Research

Project Overview

PerfectMile Dashboard is a reporting solution for operators, corporate leadership and builders (BIEs, Business Analysts) to visualise operational metrics and drive business performance. Overseen by the Last-Mile (LM) Technology team, PerfectMile is undergoing a UI renovation in which its traditional, tabular dashboards of editable metric lists will be replaced with categorised scorecards that can be clicked to deep-dive a metric, giving a more detailed view of the data. As an on-site Operations Intern with a technology background, I was asked to find a way of quantifying which metrics operators look at during a session and in what sequence. Additionally, I was to understand the motivation behind user sessions so that PerfectMile 2.0's dashboards can be tailored to individuals' task needs.

Project Execution

Following a discussion with the Sr. Technical Program Manager responsible for the PerfectMile revamp, it became apparent that the sequencing of metrics during a use session needed to be validated with data. Initial ideas of surveying the user base on their habits were dismissed for fear of not providing sufficiently in-depth or reliable accounts. I therefore sought to automate the data collection by implementing a Python eye-tracking program that executes when a user logs onto PerfectMile. The program detects where the user's eyes rest on the screen and refreshes sequentially to gauge eye motion, returning an animated plot of user eye movement overlaid onto a screen recording of the PerfectMile session. A heatmap of the screen is also generated, which indicates where the focal points of PerfectMile 1.0 lie. I had hoped to get the program to a point where it returned raw timing data on the metrics being viewed; however, standard laptop webcams proved insufficiently accurate for this purpose and the metric-reading capacity of the program fell short. As a backup, I wrote a crawler to run through the personal dashboards of each of the L4+ Operations Managers, extracting their saved metrics and the buckets into which these metrics fell. Along with some Redshift-extracted data on PerfectMile use, this information indicates commonalities in metric classification and prioritisation across the user base. Findings were summarised in a PowerBI dashboard and presented to the LastMile tech team along with the insights obtained.
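A minimal sketch of the heatmap side of the eye-tracking tool is shown below, assuming OpenCV's bundled Haar-cascade eye detector and a standard webcam. The real program additionally mapped eye position to on-screen gaze coordinates and overlaid the trace on a screen recording of the session; that calibration and animation step is omitted here.

```python
# Sketch of the heatmap half of the eye-tracking tool. It uses OpenCV's bundled
# Haar-cascade eye detector to locate the eyes in webcam frames and bins the
# detected centres into a 2D histogram. Mapping eye position to on-screen gaze
# coordinates (the calibration step) is omitted from this sketch.
import cv2
import matplotlib.pyplot as plt
import numpy as np

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)          # default webcam
points = []                        # detected eye centres, one (x, y) per detection

for _ in range(300):               # sample ~300 frames for the example
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        points.append((x + w / 2, y + h / 2))
cap.release()

if not points:
    raise SystemExit("No eyes detected; nothing to plot.")

# Bin the eye centres into a coarse grid and render the result as a heatmap.
xs, ys = zip(*points)
heat, _, _ = np.histogram2d(ys, xs, bins=(36, 64))
plt.imshow(heat, cmap="hot", interpolation="gaussian")
plt.title("Approximate eye-position heatmap")
plt.axis("off")
plt.savefig("heatmap.png", dpi=150)
```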

Additional Endeavours

In addition to working on concessions, I was assigned the task of leading PHR (Preference Honour Rate) and Contact Compliance for the site. For context, a driver's PHR indicates how well they adhere to customer delivery instructions, while Contact Compliance denotes how often the DA follows standard work and attempts to call or text the customer when presented with delivery obstacles. This role involved tracking site PHR/CC week to week, assessing which DAs were underperforming and flagging them to the relevant DSPs, who then acted accordingly.
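A minimal sketch of that weekly check is shown below; the column names, thresholds and file path are invented for the example rather than taken from the actual tracking spreadsheet.

```python
# Minimal sketch of the weekly PHR / Contact Compliance check. Column names,
# thresholds and the file path are invented for the example; the real tracker
# was a spreadsheet fed from site data.
import pandas as pd

PHR_TARGET = 0.98   # illustrative targets, not official AMZL thresholds
CC_TARGET = 0.95

week = pd.read_csv("phr_cc_week.csv")   # columns: dsp, da_id, phr, contact_compliance

flagged = week[(week["phr"] < PHR_TARGET) | (week["contact_compliance"] < CC_TARGET)]

# One list of underperforming DAs per DSP, ready to share in the weekly review.
for dsp, group in flagged.groupby("dsp"):
    print(f"{dsp}: {', '.join(group['da_id'].astype(str))}")
```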

Summary

The placement with Amazon Logistics has developed my skills across a wide range of areas, and I feel I have had a positive impact on my place of work. With further development of my technical coding skills, my knowledge of data warehousing and processing, and my technical management, I will be more confident in my position for future employment opportunities as I continue to grow as a Design Engineer.