Not Good Enough, But Try Again! The Impact of Improved Rejection Communications on Contributor Retention and Performance in Open Knowledge Collaboration
Aleksi Aaltonen
About This Presentation
Presentation at the Tilburg School of Economics and Management on May 8, 2024, and at IÉSEG School of Management, Paris on May 3, 2024, on how the Stack Overflow community question answering service tweaked its rejection notices to improve the retention of new contributors whose initial question is rejected (closed). The presentation is based on a paper coauthored with Sunil Wattal.
Size: 3.08 MB
Language: en
Added: May 12, 2024
Slides: 28 pages
Slide Content
May 8, 2024
Tilburg School of Economics and Management
Not Good Enough, But Try Again!
The Impact of Improved Rejection Communications on Contributor
Retention and Performance in Open Knowledge Collaboration
Aleksi Aaltonen
Temple University
Sunil Wattal
Temple University
Speaker timeline: PhD 2012; Moves, acquired by Facebook 2014; Assistant Professor 2014–2018; Assistant Professor 2018–
Open knowledge collaboration
High-quality production in open knowledge collaboration often requires rejecting contributions made in good faith. Rejections demotivate new contributors, in particular, from attempting further contributions.
New Contributor Retention Is a Major Issue!
1. Systems must keep converting users into regular contributors to replace those who stop contributing (Halavais and Lackaff 2008, Faraj et al. 2011, Liu and Ram 2011).
2. Most new contributors never return after making their first contribution (Arazy et al. 2016, Bayus 2013, Panciera et al. 2009, Piezunka and Dahlander 2019).
3. Contributors are even less likely to make further contributions if their initial contribution is rejected (Halfaker et al. 2013, Musicant et al. 2011).
Literature shows that there is a trade-off between quality
control and new contributor retention in open knowledge
collaboration.
We do not know how an open knowledge collaboration
system can mitigate the trade-off.
Literature
The regulation of behavior in online communities
A system should provide cues to help users to become better contributors as they
interact with the system (Ren et al. 2018).
Content removal notices are an under-utilized moderation practice in many systems
(Ahn et al. 2013, Jhaver et al. 2019, Piezunka and Dahlander 2019).
More informative rejection notices could encourage new contributors to try again
by reducing uncertainty about future interactions with the system (Fang and
Neufeld 2009, Pavlou et al. 2007, Shah 2006, White et al. 2007).
Organizational selection
The ways in which rejected applicants are treated can have important implications for:
•Employee turnover (Dlugos and Keller 2021)
•Diversity (Bapna et al. 2022)
•Job performance post-hire (Konradt et al. 2017)
•The intentions of rejected applicants to reapply (Gilliland et al. 2001)
Positive outcomes are explained by the perceived fairness of
organizational selection.
Research Design
Data
Stack Overflow, the biggest community question answering service for programmers, changed the wording of its rejection notices on June 25, 2013 to better inform contributors about the reasons for their rejection.
We analyze 11,035 new contributors who submitted their initial question within a ±84-day sample window around the treatment date (169 daily observations).
[Plot: Initial Questions per Day, April to September 2013; daily counts ranging roughly 0 to 2,000.]
Figure 2 The number of initial questions asked per day in the sample window. The dashed vertical line marks the treatment date.
A slight limitation of the setting is that since the new rejection notices are not mapped one-to-one to the old notices, we are not able to construe each pairing as a separate treatment but instead are forced to lump the notices together as old versus new notices.
The new rejection notices are intended to cover the same underlying rationale for closing a question as the old ones; that is, the objective of the change was not to change the actual policy but to communicate rejections better. This is important as our interest is specifically in rejection communications. 'Exact duplicate' was first replaced by 'Duplicate' in early 2013 and the rest of the notices were updated on June 25, 2013. We use the latter as our treatment date, after which all rejections are communicated with the new notices. To rule out other events confounding our treatment, we study documents listed in the supplementary Table A.2 that include posts published in a company blog and on so-called 'meta' sites that discuss the Stack Overflow service and, in particular, a document called "Recent feature changes to Stack Exchange" that lists updates to the platform.
OLD NOTICE
Not a real question. It's difficult to tell
what is being asked here. This question is
ambiguous, vague, incomplete, overly
broad, or rhetorical and cannot be
reasonably answered in its current form.
For help clarifying this question so that it
can be reopened, see the FAQ.
NEW NOTICE
Unclear what you are asking. Please
clarify your specific problem or add
additional details to highlight exactly what
you need. As it’s currently written, it’s
hard to tell exactly what you’re asking.
Manipulation check
We use a survey instrument with validated items: "Not informative at all ... Very informative" (Sen and Lerman 2007) and "I feel the decision to close the question was fair: Strongly disagree ... Strongly agree" (Gilliland 1994).
104 valid mTurk subjects (programmers but not Stack Overflow users) scored five rejection notices randomly drawn from the old and new notices and answered background questions including an attention check.
Robustness checks: i) including current Stack Overflow users, ii) old rejection notices shortened to a similar length to the new ones, iii) a test for a difference in sentiment between the old and new notices.
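As a rough illustration of the Welch t-test behind these comparisons (reported in Table A.5 below), here is a minimal R sketch; the rating vectors are hypothetical names, not the authors' data:

```r
# Hedged sketch: Welch two-sample t-test comparing seven-point informativeness
# ratings of the old vs. new notices. `ratings_old` and `ratings_new` are
# assumed vector names for illustration only.
t.test(ratings_old, ratings_new, var.equal = FALSE)  # Welch test allows unequal variances
```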
Old rejection notices:
•Exact duplicate. This question covers exactly the same content as earlier questions on this topic; its answers may be merged with another identical question.
•Off-topic. Questions on Stack Overflow are expected to relate to programming or software development within the scope defined in the FAQ. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit the scope. Read more about closed questions here.
•Too localized. This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, see the FAQ.
•Not constructive. As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ guidance.
•Not a real question. It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, see the FAQ.
New rejection notices:
•Duplicate. This question has been asked before and already has an answer. If those answers do not fully address your question, please edit this question to explain how it is different, or ask a new question.
•Off-topic. Stack Overflow is about programming, but programming questions you'd solve on a whiteboard or that ask what's wrong with a large block of code are no good.
•Too broad. There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs.
•Unclear what you are asking. Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking.
•Primarily opinion based. Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise.
'Exact duplicate' was replaced by 'Duplicate' in early 2013; the remaining notices changed on the June 25, 2013 treatment date.
Figure A.3 Old and new rejection notices.
Table A.5 The evaluation of the informativeness and fairness of the old and new rejection notices on a seven-point scale by Amazon Mechanical Turk subjects.

Dimension        Old notices: Mean (SD), n    New notices: Mean (SD), n    Difference in means   p-value
Informativeness  5.143 (1.477), 105           5.568 (1.200), 95            0.425*                0.026
Fairness         5.349 (1.363), 109           5.336 (1.345), 107           -0.013                0.947
Notes. ***p<0.001, **p<0.01, *p<0.05; SD, standard deviation
Contributor Retention (part 1)
•Regression discontinuity in time design (causal identification), estimated using local polynomial regression in the rdrobust R package with data-driven bandwidths (a minimal sketch follows below)
•Survival model to assess the persistence of the treatment effect
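A minimal sketch of such an RD-in-time estimation with the rdrobust package named above; the data frame `daily` and its columns are illustrative assumptions, not the authors' code:

```r
# Sketch of a local polynomial RD estimation (assumed data layout):
#   daily$day_index      - days relative to the June 25, 2013 treatment date
#   daily$retention_rate - daily retention rate of initially rejected contributors
library(rdrobust)

est <- rdrobust(y = daily$retention_rate, x = daily$day_index, c = 0,
                p = 1,                  # local polynomial order (0 | 1 in the base scenario)
                kernel = "triangular",  # uniform | triangular in the base scenario
                bwselect = "msetwo")    # separate MSE-optimal bandwidths on each side
summary(est)  # reports the estimate at the cutoff with robust confidence intervals
```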
Contributor Performance (part 2)
Mediation model to distinguish the different mechanisms by which the increased informativeness may affect contribution quality and productivity (estimated using the PROCESS macro)
Dependent variables
RETENTION_RATE
The proportion of initially rejected contributors (whose first question is rejected) who submit another question within 84 days from the rejection. Measured for each day.
MEAN_SCORE
The mean score of all questions submitted (after the initial question) by contributors who submitted their initial question on the same day. Measured for each day.
QUANTITY
The mean number of questions submitted (after the initial question) by contributors who submitted their initial question on the same day. Measured for each day. A sketch of how these daily measures could be computed follows below.
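The three daily measures could be computed along the following lines. This is a hedged dplyr sketch under an assumed table layout (`questions` with hypothetical columns user_id, cohort_day, is_initial, initial_rejected, returned_within_84d, score), not the authors' pipeline:

```r
library(dplyr)

# RETENTION_RATE: share of each day's initially rejected contributors who return
retention <- questions %>%
  filter(is_initial, initial_rejected) %>%
  group_by(cohort_day) %>%
  summarise(RETENTION_RATE = mean(returned_within_84d))

# MEAN_SCORE and QUANTITY over follow-up questions (initial question excluded)
measures <- questions %>%
  filter(initial_rejected, !is_initial) %>%
  group_by(cohort_day, user_id) %>%
  summarise(user_mean = mean(score), n_q = n(), .groups = "drop") %>%
  group_by(cohort_day) %>%
  summarise(MEAN_SCORE = mean(user_mean),  # mean of per-contributor mean scores
            QUANTITY   = mean(n_q))        # mean follow-up questions per contributor
```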
Results and contributions
Contributor retention
[Plot: Retention Rate After an Initial Rejection; daily retention rate (0% to 100%) over days -84 to +84 from the treatment.]
Figure A.5 The figure follows a common practice in regression discontinuity design studies to plot a higher-order polynomial fit to the entire sample to illustrate the discontinuity (Lee and Lemieux 2010). Here, we use a fourth-degree polynomial with weekly bins to smooth any difference between weekdays and weekends. We can observe a slightly declining retention rate before the treatment date, a discrete jump at the treatment date marked by a dashed vertical line, and then a relatively stable retention rate with some modest fluctuation. As such, the figure suggests the presence of a clear treatment effect. However, we stress that the plot must not be confused with the actual estimation in the paper, which is based on local regression around the treatment date. A key reason for choosing the latter approach is that the curvature just before and especially right after the treatment date indicates potential problems in estimating the critical boundary points using a higher-order global polynomial regression (Hahn et al. 2001, Gelman and Imbens 2019).
Table 2 Tuning Parameter Values in the Base Scenario.

Tuning parameter      Value(s)
Polynomial order      0 | 1
Bandwidth(s)          variable (1)
Kernel function       uniform | triangular
Registration delay    Max. 1 day
Closure delay         Max. 14 days
Sample window         ±84 days
Notes. (1) We use separate, data-driven MSE-optimal bandwidths before and after the treatment date.
[Plot: Average Treatment Effect (ATE), base scenario; point estimates with intervals for estimation IDs 1 to 4, labeled 0,U; 1,U; 0,T; 1,T; y-axis -0.25 to 1.00.]
Figure 3 Average treatment effects in base scenario estimations including Bonferroni-corrected 95% confidence intervals. The polynomial order (0 or 1) and kernel type (Uniform or Triangular) used in the estimation is shown next to the point estimate.
such as weekends that have fewer new contributors. A plot approximating the daily retention rate
can be found in a supplemental Figure A.5.
Contributor retention
More informative rejection notices improve the retention of initially
rejected contributors by 21.7 percentage points (mean ATE).
Authors’ names blinded for peer review
Article submitted toManagement Science;manuscriptno. 19
Table 3 Estimation Results for the Base Scenario.

Estim. ID  ATE    SE       95% CI          Poly. order  Kernel      Bandwidth before  Bandwidth after
1          0.166  (0.033)  0.084 to 0.249  0            Uniform     13.7              5.3
2          0.219  (0.048)  0.101 to 0.338  1            Uniform     17.4              14.5
3          0.223  (0.032)  0.143 to 0.303  0            Triangular  11.8              7.4
4          0.261  (0.042)  0.156 to 0.366  1            Triangular  21.1              18.9
Notes. ATE, average treatment effect; SE, standard error; CI, robust Bonferroni-corrected confidence interval (α = 0.0125)
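As a quick sanity check on the correction reported in the notes: with four base-scenario estimations, the Bonferroni-adjusted per-test level is

$$\alpha = \frac{0.05}{4} = 0.0125,$$

so each interval is computed at the $1 - 0.0125 = 98.75\%$ level to preserve 95% family-wise coverage.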
We find that the average treatment effect is statistically significant, positive, and of similar size in all estimations that make up the base scenario. This suggests that more informative rejection notices improve the retention of initially rejected contributors. Table 3 shows the average treatment effect (ATE), its standard error and robust, Bonferroni-corrected 95% confidence interval, and tuning parameter values for each estimation. The mean average treatment effect across the estimations is 0.217 (SD 0.039), that is, we approximate that the retention of initially rejected contributors increases by 21.7 percentage points due to the treatment. This is a large improvement in practice that results from a small tweak to how rejections are communicated. For every ten initially rejected contributors, two more will try asking another question because of the more informative rejection notices. The estimations using a triangular kernel (marked with 'T' in Figure 3) result in slightly larger estimates of the effect, but overall the results are very consistent across the set of estimations and provide strong support for H1.
5.4. The Persistence of the Treatment Effect
In Section 5.1 we defined a retained contributor somewhat arbitrarily as a contributor who asks a second question within 84 days from the initial rejection. We now loosen this restriction and perform a survival analysis to study how new contributors recover from the initial rejection before and after the change to rejection notices. The analysis extends the results and shows that they are not driven by the continuance threshold chosen in the base scenario. To do this, we follow each initially rejected contributor in our sample window for one year (365 days) after the rejection and observe how many have asked a second question by each additional day. Contributors who did not ask another question within one year from the initial rejection are censored on day 365.
Figure 4 shows a cumulative incidence plot confirming that our results are not an artifact of an arbitrary continuance threshold parameter value. There is a clear difference between the retention rates of initially rejected contributors before (lower curve) and after the treatment (upper curve). The continuance threshold in the base scenario (84 days) is appropriately located just to the right of the shoulder after which the difference between the curves stabilizes.
Contributor retention
The difference in the proportion of initially rejected contributors who have submitted a second question holds steady even one year after the rejection (p=0.04).
The treatment effect does not taper off over time.
[Plot: Contributor Retention After the Initial Question Is Rejected; cumulative proportion (0% to 50%) of contributors who have submitted a second question over 0 to 365 days after the initial rejection, with separate curves before and after the treatment.]
Figure 4 The proportion of initially rejected contributors who have asked a second question up to 365 days after the rejection. The dashed vertical line marks the 84-day continuance threshold used in the base scenario.
The gap between the retention of contributors rejected before and after the treatment date holds steady until the right edge of the plot, that is, one year after the initial rejection. We confirm the difference between the groups statistically by a log-rank test that rejects the null hypothesis (p=0.04) that there is no difference between the retention of contributors before and after the treatment date. The findings add to the robustness of our results and show that the treatment effect persists over time.
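The survival comparison can be sketched in R as follows; the data frame `contributors` and its columns are illustrative assumptions rather than the authors' code:

```r
# Hedged sketch of the cumulative incidence plot and log-rank test:
#   days_to_second - days from the initial rejection to the second question
#                    (365 for contributors censored at one year)
#   asked_second   - 1 if a second question was asked within 365 days, else 0
#   treated        - 1 if the initial rejection used the new notices
library(survival)

fit <- survfit(Surv(days_to_second, asked_second) ~ treated, data = contributors)
plot(fit, fun = "event",                      # cumulative incidence, as in Figure 4
     xlab = "Days after the initial rejection",
     ylab = "Has submitted a second question")

survdiff(Surv(days_to_second, asked_second) ~ treated, data = contributors)  # log-rank test
```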
5.5. Robustness Checks
In this section, we examine the assumptions underpinning our regression discontinuity design, check the sensitivity of our findings against changes to tuning parameter values, and perform falsification tests to rule out alternative explanations for our results (Imbens and Lemieux 2008). The checks are summarized in Table 4 and, together with the manipulation checks in Section 4.3, allow us to attribute the increased retention to the more informative wording of the new rejection notices.
Contributor retention
Table 4 Robustness Checks.

Check                                        Outcome
Sorting around the treatment date            Contributors do not self-select into a treatment or control group
Covariate smoothness                         Covariates remain smooth over the treatment date
Bandwidth sensitivity
  Seven-day bandwidth                        Treatment effect remains significant and nearly unchanged
  Double bandwidth                           Treatment effect remains significant but is slightly reduced
Registration and closure delay sensitivity   Treatment effect does not change at different parameter values
Falsification tests
  Effect on non-treated questions            No statistically significant effect on non-treated questions
  Change in the rejection rate               No statistically significant effect on the rejection rate
  Capacity to detect a true discontinuity    The discontinuity stands out from noise in the data
5.5.1. Sorting Around the Treatment Date. A threat to identification can arise if subjects are able to sort around the cutoff value of the running variable (Lee and Lemieux 2010). This would mean that new contributors either speed up or hold off asking an initial question in order to be treated differently in case the question is closed. This would seem unlikely. New contributors hardly know in advance when they are going to encounter a programming problem for which they need help and, even if they knew, most contributors hardly ask a question expecting that it will be rejected. Nevertheless, we use an approach found in Cattaneo et al. (2019) to test for a discontinuity in the density of the number of questions submitted by new contributors around the treatment date (13). We take the sequence of the number of daily initial questions in the sample window, remove the values for weekend days (our treatment falls on a Tuesday), and perform the test on the remaining sequence, as we do not want the fact that fewer initial questions are posted on weekends to be confused with sorting. The test fails to reject the null hypothesis of no difference in the densities just before and after the treatment (p=0.09). However, since the p-value is relatively small, we further inspect the supplementary Figure A.6, which shows the running variable density and the number of initial questions per weekday, and find no evidence of sorting. We also aggregate the number of initial questions at the weekly level, now also including weekend days, and perform the test on the sequence of weekly observations. The test again fails to reject the null hypothesis of no difference between the densities before and after the treatment date (p=0.37). We thus conclude that it is unlikely that contributors sort around the treatment date.
(13) Using time as the running variable makes the McCrary (2008) density test unsuitable here.
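Assuming the Cattaneo et al. (2019) approach refers to their local polynomial density test, a hedged sketch with the rddensity R package could look like this; the input vector name is an illustrative assumption:

```r
# `days_from_treatment`: one entry per initial question (weekend days removed),
# measured in days relative to the June 25, 2013 treatment date.
library(rddensity)

test <- rddensity(X = days_from_treatment, c = 0)
summary(test)  # p-value for the null of no density discontinuity at the cutoff
```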
Contributor performance
[Diagram: Improved Informativeness leads to Retention (path a; H1); Retention leads to Performance (path b); Improved Informativeness also has a direct path c' to Performance (H2a, H3a); the mediated effect ab corresponds to H2b and H3b.]
Figure 6 Theoretical Model.
onward to 1. Retention is measured as the daily retention rate of initially rejected contributors as described in Section 5.1. Performance is measured along two dimensions that assess the quality and quantity of questions from initially rejected contributors over their entire tenure in Stack Overflow. For our quality measure, we use the Stack Overflow question score, which represents a collective evaluation of question quality by community members and as such has been used in previous studies (e.g., Ahn et al. 2013, Bregolin 2022, Srinivasan et al. 2019). The score is calculated by Stack Overflow for each question by subtracting the number of downvotes from the number of upvotes that the question has received from registered users. We take a mean question score (excluding the initially rejected question) for each initially rejected but retained contributor who asked their initial question on the same day, sum the scores, and divide the sum by the number of such contributors, which gives us the daily measure (14). For our quantity measure, we take the mean number of questions (excluding the initially rejected question) asked by initially rejected but retained contributors who asked their initial question on the same day (15). Finally, we use the number of initially rejected contributors, the mean length of their questions, and the daily retention rate for all new contributors, rejected or not, as controls.
(14) The quality measure for day $t$ in the sample window is defined as:

$$\mathit{MEAN\_SCORE}_t = \frac{1}{n_t}\sum_{i=1}^{n_t}\frac{\sum_{j=2}^{m_i^t}\mathit{QuestionScore}^t_{ij}}{m_i^t},$$

where $n_t$ is the number of initially rejected but retained contributors who asked their initial question on day $t$ in the sample window, $m_i^t$ is the number of questions asked by the $i$th such contributor, and $\mathit{QuestionScore}^t_{ij}$ is the score for the $j$th question asked by the $i$th such contributor.
(15) The quantity measure for day $t$ in the sample window is defined as:

$$\mathit{QUANTITY}_t = \frac{\sum_{i=1}^{n_t}\left(\mathit{Questions}^t_i - 1\right)}{n_t},$$

where $n_t$ is the number of initially rejected but retained contributors who asked their initial question on day $t$ in the sample window, and $\mathit{Questions}^t_i$ is the total number of questions asked by the $i$th such contributor. We subtract one from this number to account for the initial question.
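The paper estimates the mediation model with the PROCESS macro; as an R stand-in, a hedged sketch with the `mediation` package might look like the following, with the daily data frame and column names assumed for illustration:

```r
# Simple mediation (paths a, b, c') on daily observations:
#   treated        - 0 before, 1 after the treatment date
#   retention_rate - daily retention rate (mediator)
#   mean_score     - daily MEAN_SCORE (outcome; QUANTITY is analogous)
library(mediation)

m_med <- lm(retention_rate ~ treated, data = daily)                # path a
m_out <- lm(mean_score ~ retention_rate + treated, data = daily)   # paths b and c'

med <- mediate(m_med, m_out, treat = "treated", mediator = "retention_rate",
               boot = TRUE, sims = 5000)  # bootstrap CI for the indirect effect (ab)
summary(med)  # ACME = indirect effect (ab), ADE = direct effect (c')
```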
•Retention analysis
•Selection mechanism (mediated effect)
•Performance improvement mechanism (direct effect)
•Measures: MEAN_SCORE, QUANTITY
Contributor performance
Table 6 Mediation Analysis Results.

Response variable  Direct effect (c'): Coefficient (SE), 95% CI                 Indirect effect (ab): Effect (SE), 95% bootstrap CI
MEAN_SCORE         -23.703 (22.319), -67.777 to 20.370 (H2a)                    5.331° (4.082), -0.567 to 15.121 (H2b)
QUANTITY           -8.384 (12.881), -33.822 to 17.053 (H3a)                     4.705* (2.712), 0.287 to 10.823 (H3b)
Notes. Indicative p-value equivalents: ***p<0.001, **p<0.01, *p<0.05, °p<0.1; SE, standard error; CI, confidence interval
6.2.1. Robustness Check. The literature on the regulation of behavior in online communities suggests that contributors can be nudged to make better contributions (Piezunka and Dahlander 2019, Ren et al. 2018, Srinivasan et al. 2019), yet we find no evidence of quality improvement triggered by more informative rejection notices. To ensure that we are not missing such an effect, we devise a further check to test if we can detect any quality improvement. To do this, we now focus on the quality improvement from the initial question to the second question instead of the mean quality of all questions over the contributor's entire tenure in the system. While it is intuitive that contributors may learn from an initial rejection and make better subsequent contributions (Halfaker et al. 2011), we are interested in whether they improve more after the treatment. The idea is that evidence of any additional improvement could be attributed to the more informative rejection notices. We again use the model shown in Figure 6 and the constructs defined in Section 6.1, except for the response variable, which is measured as the mean score improvement from the initially rejected question to the contributor's second question, for each day in the sample window (18). The results with respect to the direct path (c') and the indirect path (ab) are summarized in the supplementary Table A.12. Again, we find no evidence of quality improvement, as the direct path c' is statistically insignificant for ∆SCORE (Coefficient = -8.384, SE = 12.881, CI = -33.822 to 17.053).
7. Discussion
We motivated our study by drawing attention to a trade-off between securing output quality and converting users into regular contributors in open knowledge collaboration. To maintain the high quality of its outputs, an open knowledge collaboration system must often reject contributions made in good faith by new contributors. The literature tells us that such rejections can negatively impact the capacity of the system to renew its contributor base, since rejections demotivate newcomers
(18) Immediate quality improvement for day $t$ in the sample window is defined as:

$$\Delta\mathit{SCORE}_t = \frac{\sum_{i=1}^{n_t}\left(\mathit{Score}^t_{i2} - \mathit{Score}^t_{i1}\right)}{n_t},$$

where $n_t$ is the number of initially rejected but retained contributors who asked their initial question on day $t$ in the sample window, $\mathit{Score}^t_{i2}$ is the score for the second question from the $i$th such contributor, and $\mathit{Score}^t_{i1}$ is the score for the initial question from the $i$th such contributor.
More informative rejection notices improve the retention of contributors (selection mechanism) who are more productive (they submit more questions over their entire tenure in the system) and may also submit higher-quality questions.
Summary of results
1. A minor change to the wording of the rejection notices, which reduced the uncertainty about the outcome should a rejected contributor try again, resulted in a substantial increase in the retention of initially rejected contributors.
2. The impact on retention is long-lasting; it does not taper off as time passes from the initial rejection.
3. The newly retained contributors are more productive: they submit more, and possibly better-quality, questions.
Contributions
1. We extend earlier results on contributor retention by theoretically explaining the improved retention with the reduced uncertainty about future interactions (Jhaver et al. 2019, Piezunka and Dahlander 2019, Srinivasan et al. 2019).
2. We identify a selection mechanism by which the more informative rejection notices also improve contributor performance.
3. We offer a template for studying the impact of rejection communications in other open knowledge collaboration systems.
Thank You!
Base scenario tuning parameters
Table 2 Tuning Parameter Values in the Base Scenario.

Tuning parameter      Value(s)
Polynomial order      0 | 1
Bandwidth(s)          variable (1)
Kernel function       uniform | triangular
Registration delay    Max. 1 day
Closure delay         Max. 14 days
Sample window         ±84 days
Notes. (1) We use separate, data-driven MSE-optimal bandwidths before and after the treatment date.
The Data Studies Bibliography is a curated, searchable bibliography of
papers that focus on data as an object of research. The bibliography is
available at https://DataStudiesBibliography.org and maintained by
Aleksi Aaltonen (Temple University) and Marta Stelmaszak (Portland
State University).
Just added!
Presentation history
Date | Institution / event | Title
May 8, 2024 | Tilburg School of Economics and Management | Not Good Enough, But Try Again! The Impact of Improved Rejection Communications on Contributor Retention and Performance in Open Knowledge Collaboration
May 3, 2024 | IÉSEG School of Management, Paris | Not Good Enough, But Try Again! The Impact of Improved Rejection Communications on Contributor Retention and Performance in Open Knowledge Collaboration
February 11, 2022 | Fox School of Business (MIS Distinguished Speaker Series) | Not Good Enough But Try Again! Mitigating the Impact of Rejections on New Contributor Retention in Open Knowledge Collaboration
December 3, 2021 | University of Miami | Not Good Enough But Try Again! Mitigating the Impact of Rejections on New Contributor Retention in Open Knowledge Collaboration
October 24, 2021 | CIST, Los Angeles | Rejecting and Retaining New Contributors in Open Knowledge Collaboration: A Natural Experiment in Stack Overflow
June 16, 2020 | European Conference on Information Systems (ECIS), online presentation | Rejecting and Retaining New Contributors in Open Knowledge Collaboration: A Natural Experiment in Stack Overflow Q&A Service