Project 1 (Main Project – DNR Reduction and Process Optimisation)
Project Overview
A subset of Amazon's concessions, Delivered Not Received Controllable (DNR-C) refers to shipments marked as delivered by the Delivery Associate (DA) and verified by operational scans, but which the customer claims never arrived. Approaches to reducing DNR-C DPMO vary widely by delivery station, and there are evident inconsistencies in how data is shared
with Delivery Service Partners (DSPs). The objectives of this project were therefore to review the DNR process end-to-end, optimise the workflow and align practice across the UK network.
Problem Context
After interacting with my assigned DSPs during concession-review meetings, investigating DA training resources and consulting a variety of project stakeholders, it became apparent that the main issue with respect to DNR-C was in fact the metric being used to gauge DA liability. Several situational factors can influence a customer's propensity to claim a concession, so it is unjust to take a high DNR-C DPMO as an indication of a driver's adherence to the delivery workflow and penalise them for perceived non-conformance. Further issues lay in the lack of a defined workflow for processing concessions in-station and for liaising with DSPs on their associates' performance. Due to this shortfall, remedial actions taken by both DSP and Amazon managers were subject
to personal biases and lenience. As third-party contractors, DSPs experience immense pressure to satisfy both AMZL and their employees (DAs). Given that pressure on their DAs is constantly rising in line with metric targets, DSPs will require intensive support to coach their employees if driver attrition is to be suppressed.
Proposal & Benefits
The main deliverable of this project was a two-part DA-level scorecard (termed DAPS) consisting of the aggregated metrics DA Compliance and Station Accountability. Constituent metrics were selected following an in-depth investigation of the factors that influence the DNR variable and statistical analysis of the power/significance of each metric in predicting DNR. For DA Compliance, metric weighting was determined as the product of 'Degree (DA) Controllable' and 'Business Impact', with both measures ranging from 1 to 5. Similarly, the constituent metrics of Station Accountability were weighted by the product of 'Degree (Station) Controllable' and the metric's predictor strength, rounded to
the nearest quintile. The products were then sign-adjusted to reflect whether the metric positively or negatively affects the Compliance/Accountability score, and scaled to output a percentage value. Multiple parties were consulted in determining the metric weightings, and this is reflected in the final scorecards.
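The weighting scheme described above can be sketched in Python. The metric names and the controllability/impact scores below are illustrative assumptions for demonstration only, not the actual DAPS inputs:

```python
# Hypothetical sketch of the DA Compliance weighting scheme.
# Each metric carries a weight = degree controllable x business impact
# (both 1-5), sign-adjusted for whether the metric helps or hurts.

metrics = {
    # name: (degree_controllable, business_impact, helps_score)
    "photo_on_delivery_rate": (5, 4, True),   # assumed example metric
    "geo_scan_deviation":     (3, 5, False),  # assumed example metric
    "attended_delivery_rate": (4, 3, True),   # assumed example metric
}

def compliance_score(values):
    """values: metric name -> observed rate in [0, 1].
    Returns an aggregated DA Compliance percentage."""
    weights = {name: dc * bi for name, (dc, bi, _) in metrics.items()}
    total = sum(weights.values())
    # Negative metrics contribute more as their rate falls.
    score = sum(
        weights[name] * (values[name] if positive else 1.0 - values[name])
        for name, (_, _, positive) in metrics.items()
    )
    return 100.0 * score / total  # scaled to a percentage
```

A DA who maximises every positive metric and avoids every negative one scores 100%; the weights simply decide how much each shortfall costs.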
Implementation
A process flowchart of how this two-part scorecard would be used in the Operations environment is shown below. In summary, full-detail versions of the scorecards (by DSP) are distributed to the parties that will make direct use of them, along with the DSPs themselves. These parties then use the scorecard to identify concession instances where AMZL was likely
at fault and deep-dive those specifically, rather than wasting time investigating non-controllable DA-misconduct DNRs. Another responsibility of the Operations team would be to consult DSPs regarding underperforming DAs and ensure adherence to standard procedure.
The full-detail scorecard is also shared with the DSP teams, who can thus review their drivers' performance across a range of areas, contrast scores with fleets at other sites and monitor DAs who need unofficial coaching. The objective nature of performance meetings with AMZL management reduces the perception of unfair pressure and unattainable
standards; DSPs instead feel supported in their roles and gain a more thorough understanding of AMZL's expectations.
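The deep-dive triage step can be illustrated with a minimal sketch. The thresholds and record layout below are hypothetical assumptions, not values from the actual workflow:

```python
# Hypothetical triage rule: a concession is worth a station deep-dive
# when the DA's compliance score is high (they likely followed workflow)
# but the station's accountability score is weak. Both thresholds are
# illustrative assumptions.

def triage(concessions, compliance_floor=85.0, accountability_ceiling=70.0):
    """concessions: list of dicts with 'da_compliance' and
    'station_accountability' percentages (0-100).
    Returns only the instances worth investigating station-side."""
    return [
        c for c in concessions
        if c["da_compliance"] >= compliance_floor
        and c["station_accountability"] <= accountability_ceiling
    ]
```

Everything filtered out is either probable DA non-conformance (handled via DSP coaching) or already-accountable station process, so Operations time goes only where a root-cause investigation can pay off.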
Project 2 - PerfectMile User Research
Project Overview
PerfectMile Dashboard is a reporting solution for operators, corporate leadership and builders (BIEs, Business Analysts) to visualise operational metrics and drive business performance. Overseen by the Last-Mile (LM) Technology team, PerfectMile is undergoing a UI renovation wherein its traditional tabular dashboards of editable metric lists will be
replaced with categorised scorecards that can be clicked to deep-dive a metric, giving a more detailed view of the data. As an on-site Operations Intern with a technology background, I was asked to find a way of quantifying which metrics operators look at during a session, and in what sequence. Additionally, I was to understand the motivation
behind user sessions so that PerfectMile 2.0's dashboards can be tailored to individuals' task needs.
Project Execution
Following a discussion with the Sr. Technical Program Manager responsible for the PerfectMile revamp, it became apparent that the sequencing of metrics during a use session needed to be validated with data. Initial ideas of surveying the user base on their habits were dismissed for fear of not obtaining sufficiently in-depth or reliable accounts. I therefore sought to automate the data collection process by implementing a Python eye-tracking program that executes when a user logs onto PerfectMile. The program detects where the user's eyes rest on the screen and refreshes sequentially to gauge eye motion, returning an animated plot of eye movement overlaid on a screen recording of the PerfectMile session. A heatmap of the screen is also generated, indicating where the focal points of PerfectMile 1.0 lie. I had hoped to develop the program to the point where it returned raw timing data on the metrics being viewed; however, it transpired that standard laptop webcams were not accurate enough for this purpose and the metric-reading capacity of the program fell short. As a backup, I wrote a crawler to run through
the personal dashboards of each L4+ Operations Manager, extracting their saved metrics and the buckets into which those metrics fell. Along with some Redshift-extracted data on PerfectMile use, this information indicates commonalities in metric classification and prioritisation across the user base. Findings were summarised in a Power BI
dashboard and presented to the Last-Mile tech team along with the insights obtained.
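The heatmap step of the eye-tracking program can be sketched in isolation. Webcam gaze estimation is out of scope here, so the sketch assumes a list of (x, y) fixation points in screen pixels has already been captured; grid dimensions and screen size are assumptions:

```python
from collections import Counter

def gaze_heatmap(points, width=1920, height=1080, cols=48, rows=27):
    """Bin (x, y) fixation points into a coarse grid of cells.
    Returns a rows x cols grid normalised to [0, 1], where 1.0 marks
    the strongest focal point of the session."""
    counts = Counter(
        (min(int(y * rows / height), rows - 1),   # grid row
         min(int(x * cols / width), cols - 1))    # grid column
        for x, y in points
    )
    peak = max(counts.values(), default=1)
    # Counter returns 0 for untouched cells, so empty cells stay 0.0.
    return [[counts[(r, c)] / peak for c in range(cols)]
            for r in range(rows)]
```

Rendering the grid as a colour overlay (e.g. with matplotlib's `imshow`) then gives the kind of focal-point picture the project produced for PerfectMile 1.0.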
Additional Endeavours
In addition to working on concessions, I was assigned the task of leading PHR (Preference Honour Rate) and Contact Compliance for the site. For context, a driver's PHR indicates how well they adhere to customer delivery instructions, whilst Contact Compliance denotes how often the DA follows standard work and attempts to call/text the
customer when presented with delivery obstacles. This role involved tracking site PHR/CC week-to-week, assessing which DAs were underperforming and flagging them to the relevant DSPs, who then acted accordingly.
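The week-to-week flagging routine can be sketched as follows. The targets and record layout are illustrative assumptions, not the site's actual standards:

```python
# Hypothetical weekly flagging rule for PHR / Contact Compliance.
# Thresholds and field names are assumptions for illustration.

def flag_underperformers(weekly_rates, phr_target=0.98, cc_target=0.95):
    """weekly_rates: list of {'da': str, 'dsp': str, 'phr': float, 'cc': float}
    with rates in [0, 1]. Returns {dsp: [da, ...]} so each DSP
    receives its own follow-up list."""
    flagged = {}
    for rec in weekly_rates:
        if rec["phr"] < phr_target or rec["cc"] < cc_target:
            flagged.setdefault(rec["dsp"], []).append(rec["da"])
    return flagged
```

Grouping the output by DSP mirrors the process in the report: each DSP is contacted only about its own underperforming DAs.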