AI/ML Infra Meetup | Scaling Experimentation Platform in Digital Marketplaces: Architecture, Implementation & Lessons Learned
Alluxio
About This Presentation
AI/ML Infra Meetup
Aug. 29, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Koundinya Pidaparthi (VP of Analytics @ Poshmark)
Scaling experimentation in digital marketplaces is crucial for driving growth and enhancing user experiences. However, varied methodologies and a lack of experiment governance can hinder the impact of experimentation, leading to inconsistent decision-making, inefficiencies, and missed opportunities for innovation.
At Poshmark, we developed a homegrown experimentation platform, Lightspeed, that allowed us to make reliable and confident reads on product changes, which led to a 10x growth in experiment velocity and positive business outcomes along the way.
This session will provide a deep dive into the best practices and lessons learned from successful implementations of large-scale experiments. We will explore why experimentation matters, how to overcome scalability challenges, and the frameworks and technologies that enable effective testing.
Size: 1.68 MB
Language: en
Added: Sep 24, 2024
Slides: 12 pages
Slide Content
Scaling Experimentation @ Poshmark
Koundinya Pidaparthi, VP Analytics
Aug 2024
Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world.
POSHMARK NETWORK & SCALE
Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV. Users shop from over 10k brands and 90 categories.
EXPERIMENTATION IS CORE TO DELIVERING A QUALITY PRODUCT
As a dynamic marketplace, we rely on experimentation and data to build tools and to measure impact and regressions on metrics.
●Continuous Weekly Releases
●Varied Use Cases
○Code Migrations
○New Feature Launches
○App Redesign
○Campaign Management
●100s of metrics monitored for regression
●200+ internal users of the experimentation platform
EXPERIMENTATION PLATFORM CHALLENGES
●Lack of consistent and reliable frameworks can slow down decision making.
●High exposure can lead to too much risk.
●Low experiment concurrency can slow down product velocity.
●Analytics resources are often tied up with tedious workflows and repetitive analysis.
●Experiment operations and coordination can be time consuming.
●Lack of shared learnings can lead to more cold starts.
Our goal is to build a reliable experimentation platform to accelerate product innovation, ship high-quality product, streamline experiment processes and save time.
3 MAIN COMPONENTS OF OUR EXPERIMENTATION PLATFORM
●Analysis and Decision Framework
●Streamlined Experiment Operations (Analytics Architecture)
●Feature Flagging Enhancements
Analysis and Decision Framework
●Experiment Design Guidelines
○Clarify Goals
○Standardized Metrics Nomenclature
■North Star
■Feature Metrics
■Guardrail Metrics
○Sizing
○Standardized Stats Engine (a sizing sketch follows this slide)
●Standardized Operating Rhythm and Decision Frameworks
○Readout Reviews
○Pre-defined success criteria
~ Cut experimentation cycle time, achieved a 3X lift in experimentation volume!
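The sizing and stats-engine bullets above are concrete enough to illustrate with code. Below is a minimal sketch, in Python, of the kind of per-variant sample-size calculation a standardized stats engine might expose; the required_sample_size helper, the 10% baseline conversion rate, and the default alpha/power values are assumptions for illustration, not details from the talk.

# Hypothetical sizing helper: per-variant sample size for a two-proportion test.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def required_sample_size(baseline_rate, min_detectable_lift, alpha=0.05, power=0.8):
    """Per-variant sample size needed to detect a relative lift on a proportion metric."""
    target_rate = baseline_rate * (1 + min_detectable_lift)
    effect = proportion_effectsize(target_rate, baseline_rate)  # Cohen's h
    n = NormalIndPower().solve_power(effect_size=effect, alpha=alpha,
                                     power=power, alternative="two-sided")
    return int(round(n))

# Example: detect a 2% relative lift on an assumed 10% conversion rate.
print(required_sample_size(0.10, 0.02))

Centralizing a calculation like this in a single stats engine is what keeps sizing and readouts consistent across teams, which is the standardization the slide calls out.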
Streamlined Experiment Operations - Analytics Architecture
●Lightspeed Metrics
○Standardized Business Logic
○Python Stats Package (a readout sketch follows this slide)
○Pre-Processed ETLs
●Knowledge Repo for collaborative reporting
○Templated Notebooks
○Converts notebooks into user-friendly readouts
○Centralized Learnings
●AB Console
○On-Demand Experiment Scheduling
○Improved visibility into tests on the platform
~ 20% to 30% efficiency gains for analysts
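As a companion to the Python Stats Package bullet above, here is a hedged sketch of what a metric readout helper might look like: a Welch's t-test comparing a per-user metric across two variants. The readout function name, the variant and gmv_per_user column names, and the synthetic data are illustrative assumptions, not the actual Lightspeed package.

# Hypothetical readout helper over pre-processed, per-user experiment data.
import numpy as np
import pandas as pd
from scipy import stats

def readout(df, metric, control="control", treatment="treatment"):
    """Welch's t-test and relative lift for a per-user metric across two variants."""
    a = df.loc[df["variant"] == control, metric].to_numpy()
    b = df.loc[df["variant"] == treatment, metric].to_numpy()
    t_stat, p_value = stats.ttest_ind(b, a, equal_var=False)  # Welch's t-test
    lift = (b.mean() - a.mean()) / a.mean()
    return {"metric": metric, "lift_pct": 100 * lift, "p_value": p_value}

# Example with synthetic data standing in for a pre-processed ETL output.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "variant": ["control"] * 5000 + ["treatment"] * 5000,
    "gmv_per_user": np.concatenate([rng.gamma(2.0, 10.0, 5000),
                                    rng.gamma(2.0, 10.2, 5000)]),
})
print(readout(df, "gmv_per_user"))

Dropped into a templated notebook, a helper like this turns repetitive per-experiment analysis into a one-line call, which is the kind of automation behind the efficiency gains the slide quotes.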
Feature Flagging Enhancements
~ 10x increase in capacity to run concurrent experiments
●Created more layers and segments
○Increased capacity to run more concurrent experiments
○Allow for smaller exposure tests, mitigating risk
○Support expanded use cases: Holdouts, Multivariate Tests
●Built assignment algorithm (a bucketing sketch follows this slide)
○On-demand assignment
○More confidence in randomization
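The assignment algorithm bullet above is easiest to see with a small example. Below is a minimal sketch, assuming a hash-based layered bucketing scheme; the salt format, the 1000-bucket granularity, and the function and layer names are hypothetical, not the actual Lightspeed implementation.

# Hypothetical hash-based assignment into layers and bucket ranges.
import hashlib
from typing import Dict, Optional, Tuple

BUCKETS_PER_LAYER = 1000  # assumed granularity; allows small-exposure tests

def bucket(user_id: str, layer: str) -> int:
    """Deterministically map a user into one of the layer's buckets."""
    digest = hashlib.sha256(f"{layer}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % BUCKETS_PER_LAYER

def assign(user_id: str, layer: str,
           layout: Dict[str, Tuple[int, int]]) -> Optional[str]:
    """layout maps each variant to a half-open [start, end) bucket range."""
    b = bucket(user_id, layer)
    for variant, (start, end) in layout.items():
        if start <= b < end:
            return variant
    return None  # user is not exposed to any variant in this layer

# Example: a 10% exposure test split evenly between control and treatment.
layout = {"control": (0, 50), "treatment": (50, 100)}
print(assign("user_123", "search_ranking_layer", layout))

Because each layer salts the hash differently, assignments across layers are effectively independent, which is what lets more experiments run concurrently without interfering with one another.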
Lessons Learned
●Actively listened to user feedback and built an MVP to address key friction points, which led to immediate value and adoption of the platform.
○Standardization and consistency were key to building confidence.
○Reducing friction increases adoption.
●An open data culture, knowledge sharing and healthy debate were critical in building a culture of experimentation.
○Focus on learnings: every AB test is a success regardless of outcome.
○Continuous training and accountability are key to sustained success.
●Be open to new use cases for experimentation.
○Campaign management, infrastructure migrations, holdouts.
●Be strategic about investments: continuous investment is not always good ROI given competing priorities and limited resources.