Reproducible Network Research With High-Fidelity Emulation


About This Presentation

Reproducible Network Research With High-Fidelity Emulation

- Stanford University
 


Slide Content

Reproducible Network Research With High-Fidelity Emulation
Nikhil Handigol+, Brandon Heller+, Bob Lantz*, Vimal Jeyakumar+, Nick McKeown+
+ Stanford University, Palo Alto, USA
* Open Networking Laboratory, Palo Alto, USA
2

The scientific method says:
experiments are only valid if they can be reproduced.
The norm in physics, medicine, etc.
But what about the computational sciences?
3

D.L. Donoho, 1995:
"An article about computational science is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures."
This is what network research should be.
It should be reproducible.
Papers should be runnable.
4

[Paper page shown on the slide:]

(a) Benchmark tests from Hedera paper (Part 1). (b) Benchmark tests from Hedera paper (Part 2).
Figure 8: Effective throughput with ECMP routing on a k=4 fat tree vs. an equivalent non-blocking switch. Mininet-HiFi vs. hardware testbed [13].

[...] values (1, 2, 4 and 8), flows traverse more layers, degrading throughput.
The Mininet-HiFi results closely match those from the hardware testbed; in 16 of the 20 traffic patterns they are nearly identical. In the remaining four traffic patterns (randx2,3,4 and stride8) the results in the paper have lower throughput because, as the authors explain, the commercial switch in their testbed is built from two switching chips, so the total buffering depends on the traffic pattern. To validate these results, we would need to know the mapping of hosts to switch ports, which is unavailable.
The main takeaway from this experiment is that Mininet-HiFi reproduces the performance results for this set of data-center networking experiments. It appears possible to collect meaningful results in advance of (or possibly without) setting up a hardware testbed. If a testbed is built, the code and test scripts used in Mininet-HiFi can be reused without change.
Verifying fidelity: Unlike DCTCP, the Hedera experiment depends on coarse-grained metrics such as aggregate throughput over a period of time. To ensure that no virtual host starved and that the system had enough capacity to sustain the network demand, we measured idle time during the experiment (as described in §3.4). In all runs, the system had at least 35% idle CPU time every second. This measurement indicates that the OS was able to schedule all virtual hosts and packet transmissions without falling behind an ideal execution schedule on hardware.
Lessons learned using Mininet-HiFi: In the Hedera testbed, machines were equipped with 1 Gb/s network interfaces. We were unable to use Mininet-HiFi to replicate Hedera's results even with 100 Mb/s network links, as the virtual hosts did not have enough CPU capacity to saturate their network links. While Hedera's results do not qualitatively change when links are scaled down, it is a challenge to reproduce results that depend on the absolute value of link/CPU bandwidth.
5.3 Sizing Router Buffers
In our third example we reproduce results that were measured on a real hardware testbed to determine the number of packet buffers needed by a router. The original research paper on buffer sizing was presented at Sigcomm 2004 [35]. All Internet routers contain buffers to hold packets during times of congestion. The size of the buffers is dictated by the dynamics of TCP's congestion control algorithm: the goal is to make sure that when a link is congested, it is busy 100% of the time, which is equivalent to making sure the buffer never goes empty. Prior to the paper, the common assumption was that each link needs a buffer of size B = RTT × C, where RTT is the average round-trip time of a flow passing across the link and C is the data rate of the bottleneck link. The authors showed that a link with n flows requires no more than B = (RTT × C) / √n. The original paper included results from simulation and measurements from a real router, but not for a real network. Later, at Sigcomm 2008, Neda Beheshti demonstrated these results on a hardware testbed running on the Internet2 backbone.[7]
We contacted the researchers and obtained results measured on their hardware testbed, then compared them with results from Mininet-HiFi; the Mininet-HiFi topology is shown in Figure 9(a). In the hardware experiments, a number of TCP flows go from a server at Stanford University (California) to a server at Rice University (Houston, Texas) via a NetFPGA IPv4 router in the Internet2 POP in Los Angeles. The link from LA to Houston is constrained to 62.5 Mb/s.
[7] Video of demonstration at http://www.youtube.com/watch?v=ykga6N_x27w.
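As a quick worked example of the two buffer-sizing formulas above (in Python; the 100 ms RTT and 100 flows are illustrative assumptions, only the 62.5 Mb/s bottleneck comes from the text):

import math

C_bps = 62.5e6   # bottleneck capacity from the testbed described above
RTT_s = 0.100    # assumed average round-trip time
n     = 100      # assumed number of long-lived TCP flows

rule_of_thumb = RTT_s * C_bps / 8             # B = RTT x C, in bytes (~781 KB)
small_buffers = rule_of_thumb / math.sqrt(n)  # B = RTT x C / sqrt(n), in bytes (~78 KB)

print("B = RTT x C           : %.0f KB" % (rule_of_thumb / 1e3))
print("B = RTT x C / sqrt(n) : %.0f KB" % (small_buffers / 1e3))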
Click on a figure
5

Brings up a webpage of instructions
6

Scroll to the bottom…
7

Launch an EC2 instance
8

Run a command in the terminal to generate results
9
> ~/mininet-tests/dctcp$ ./run-dctcp.sh

8 minutes & 8 cents after the click: a reproduced result
10
> ~/mininet-tests/dctcp$ ./run-dctcp.sh
> ~/mininet-tests/dctcp/results$ xpdf dctcp.pdf

If papers were runnable (easily reproducible):
easier to understand and evaluate papers
easier to build upon the work of others
easier to transfer new ideas to industry
11

Why aren't all networking research papers like this?
12

13
Much of CS (ML, PL, Security, …):
Code + Data + Scripts
Use any commodity PC (or VM).

Network Systems (Congestion control, Routing, Topology, …): ?
Must implement multiple servers, network elements, links – all running in parallel, all with accurate timing.

Discrete-Event Simulation
14
[Figure: virtual time + models of link, queue, switch, protocol, app.]
ns-2, ns-3, OPNET, …

But… we don't trust simulation.
Not believable unless validated:
Modeled links == real links
Modeled switches == real switches
Modeled protocols == real protocols
Modeled applications == real applications
Realism concerns.
15

16

Testbed results can be hard to (re)produce.
Flexibility
• Topology restrictions
• May not be able to change firmware
Resource availability
• before a deadline?
• one year later?
17

Problem
Network research tends not to be both easily reproducible and realistic.
18
Solution
Build an emulator whose results you can trust as well as verify.

What is a network emulator?
19
Real application, network stack code
Emulated network core
Matching the behavior of hardware: High Fidelity

20
Sources of Emulator Infidelity: Event Overlap
[Figure: timeline of overlapping events in real time for a simple client/server exchange (A: send request, B: init, packet xmits, B: send response, busy/idle periods), comparing a real setup with high-fidelity (HiFi) and low-fidelity (LoFi) emulation.]

// A: Client
while(1) {
    send_request(socket);
    wait_for_reply(socket);
}

// B: Server
init();
while(1) {
    wait_for_request(socket);
    send_response(socket);
}

21
Sources of Emulator Infidelity: Software Forwarding (variable delays)
[Same client/server timeline figure as the previous slide, here highlighting the variable delays introduced by software forwarding.]

22
Our Approach
Resource-Isolating Emulator (Mininet-HiFi) + Fidelity Monitor
[Figure: a fidelity meter (HIGH / MEDIUM / LOW) alongside an emulated topology with example resource limits: 500 MHz CPU, 20 pkt buffers/port, 10 Mb/s links, 1 ms delay.]

Talk Outline
• Motivation
• 1. Emulator Fidelity
• 2. Mininet-HiFi Architecture
• 3. Reproducing Research
• Related Work
• Progress Report
23

1. Emulator Fidelity
24

A Workflow for High Fidelity Emulation
25
Create experiment → Run the experiment on a PC, with logging → Analyze experiment fidelity using "network invariants"
Invariants hold: High Fidelity Emulation!
Instance(s) of behavior that differ from hardware → Run again: increase resources or reduce experiment scale
1: what to log?   2: which invariants?   3: how close?

What to log?
26
Consider logging utilization of the emulator CPU.
100% is bad.
But is X% (say, 80%) necessarily good?
Can't get back "lost time" in an emulator.
→ CPU utilization is insufficient.
Need to consider fine-grained event timings.
[Figure: timeline that is 80% busy, 20% idle.]

27
Single-Packet Invariants
[Figure: packet path queue → switch → queue → link, with per-packet delays: queuing, forwarding, transmission, propagation.]
Log enqueue, dequeue timestamps.

28
Packet Gap Invariants
[Figure: same packet path, showing packet spacing when the queue is occupied.]
Is R_measured ≤ R_configured?  (link capacity)
Explored in the paper.
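To make the invariant concrete, here is a minimal Python sketch (not from the talk; the log format is assumed) that flags dequeue gaps deviating from the configured link rate by more than one packet time:

# Sketch: check the packet-gap invariant against a configured link rate.
# Assumes a hypothetical log of (dequeue_time_seconds, packet_size_bytes)
# records captured while the queue was occupied.

def spacing_errors(dequeues, rate_bps):
    """Yield (time, deviation) per consecutive dequeue pair, where deviation
    is measured in packet transmission times at the configured rate."""
    for (t_prev, size_prev), (t_cur, _) in zip(dequeues, dequeues[1:]):
        ideal_gap = size_prev * 8.0 / rate_bps   # serialization time of previous packet
        yield t_cur, abs((t_cur - t_prev) - ideal_gap) / ideal_gap

# Example: 1500-byte packets on a 100 Mb/s link should be ~120 us apart.
log = [(0.000000, 1500), (0.000120, 1500), (0.000600, 1500)]
flagged = [(t, d) for t, d in spacing_errors(log, rate_bps=100e6) if d > 1.0]
print(flagged)   # the 480 us gap is off by ~3 packet times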

How close?
High Fidelity: match hardware variability.
• Clock drift (== one packet)
• NIC-to-memory processing (~= 25 packets)
• Scheduler non-determinism (~= milliseconds)
29

Example Workflow for One Invariant
30
Create experiment → Run the experiment on a PC, with logging → Analyze experiment fidelity using "network invariants"
Invariants hold: High Fidelity Emulation!
Instance(s) of behavior that differ from hardware → Run again: increase resources or reduce experiment scale
1: Log dequeue events
2: Measure packet spacing
3: Is any packet delayed by more than one packet time?
If this workflow is valid, "pass" → same result as hardware.
DCTCP

Data Center TCP (DCTCP) [Sigcomm 2010, Alizadeh et al.]
31
[Figure: packets in queue vs. time, for TCP and for DCTCP around its marking threshold; in both cases the queue stays occupied and throughput is 100%.]
Packet spacing we should see: …

Hardware Results, 100 Mb/s
32
[Plot from q-dctcp-plot.txt: packets in queue (0-40) vs. time (0-120 seconds).]
Queue occupied, 100% throughput, 6 packets variation.

Emulator Results
33
Does checking an invariant (packet spacing) identify wrong results?
80 Mb/s: 100% tput, 6 pkts variation → same result
160 Mb/s: 100% tput, 6 pkts variation → same result
320 Mb/s: wrong; limits exceeded

Packet Spacing Invariant w/ DCTCP
34
[Plot: CCDF (percent, log scale) of packet spacing error (log scale), with low/med./high error regions and markers at 1 pkt and 25 pkts.]
10% of the time, the error exceeds one packet.

Packet Spacing Invariant w/ DCTCP
35
[Plot: CCDF (percent) vs. percentage deviation from expected packet spacing, for 10, 20, 40 and 80 Mb/s; the one-packet error level is marked.]

Packet Spacing Invariant w/ DCTCP
36
[Same plot as the previous slide: CCDF of percentage deviation from expected packet spacing for 10, 20, 40 and 80 Mb/s, with the one-packet error level marked.]

Packet Spacing Invariant w/ DCTCP
37
[Same plot: CCDF of percentage deviation from expected packet spacing for 10, 20, 40 and 80 Mb/s, with the one-packet error level marked.]
160 Mb/s: failed emulation?
The beauty of network invariants is that they catch and quantify the error in this run.

2. Mininet-HiFi Architecture
38

39
Original Mininet: Emulator
Containers w/ Network Namespaces
[Figure: a Linux server; in user space, three vhosts (containers), each in its own private network namespace (10.0.0.1, 10.0.0.2, 10.0.0.3) running processes 1-4, connected by virtual ethernet pairs to vswitches in the root network namespace in kernel space.]

40
Emulator + Performance Isolation
[Same figure, adding isolated links (e.g., 100 Mb/s, 2 ms) enforced by Linux packet schedulers (HTB, HFSC).]

41
Emulator + Performance Isolation
[Same figure, adding isolated hosts (e.g., 20% CPU) enforced by Linux process schedulers (CFS + bandwidth limits, RT).]

42
Emulator + Performance Isolation + Invariant Monitoring
[Same figure, adding monitoring tools: Linux kernel tracing of enqueue, dequeue, etc.]
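For context, here is a minimal sketch of how such an isolated experiment is expressed with the public Mininet Python API (CPULimitedHost for the CPU caps, TCLink for the shaped links). The topology, rates, and CPU share below are illustrative assumptions, not values from the talk.

#!/usr/bin/env python
# Minimal sketch (illustrative values): two CPU-limited hosts on rate-limited links.
from mininet.net import Mininet
from mininet.node import CPULimitedHost, Controller
from mininet.link import TCLink
from mininet.log import setLogLevel

def run():
    net = Mininet(controller=Controller, host=CPULimitedHost, link=TCLink)
    net.addController('c0')
    h1 = net.addHost('h1', cpu=0.2)            # ~20% of a CPU via CFS bandwidth limits
    h2 = net.addHost('h2', cpu=0.2)
    s1 = net.addSwitch('s1')
    net.addLink(h1, s1, bw=100, delay='2ms')   # 100 Mb/s, 2 ms, shaped by Linux HTB
    net.addLink(h2, s1, bw=100, delay='2ms')
    net.start()
    print(h1.cmd('ping -c 3 %s' % h2.IP()))    # sanity check across the emulated link
    net.stop()

if __name__ == '__main__':
    setLogLevel('info')
    run()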

3. Reproducing Research
43

Examples in the paper
• DCTCP [Alizadeh, SIGCOMM 2010]
• Router Buffer Sizing [Appenzeller, SIGCOMM 2004]
• Hedera ECMP [Al-Fares, NSDI 2010]
Able to replicate key results from 3 testbeds … using an emulator.
44

How do you know it really works?
45
Test it. On guinea pigs students.

46
Stanford CS244 Spring '12:
Advanced Topics in Networking

→ Pick a paper.
→ Reproduce a key result, or challenge it (with data).
→ You have: $100 EC2 credit, 3 weeks, and must use Mininet-HiFi.
47

Project Topics: Transport, Data Center, Queuing
CoDel, HULL, MPTCP, Outcast, Jellyfish, DCTCP, Incast, Flow Completion Time, Hedera, DCell, TCP Initial Congestion Window, Misbehaving TCP Receivers, RED
48

49
[Same project topics as the previous slide]
37 students, 18 projects, 16 replicated

50
[Same project topics; CoDel, Outcast, Jellyfish and DCell highlighted]
37 students, 18 projects, 16 replicated, 4 beyond

51
[Same project topics; CoDel, Outcast, Jellyfish and DCell highlighted]
37 students, 18 projects, 16 replicated, 4 beyond, 2 not replicated

CoNEXT '12 runnable papers?
15/31 seem like candidates:
• MPTCP is not Pareto-optimal: Performance issues and a possible solution
• Architecting for Edge Diversity: Supporting Rich Services over an Unbundled Transport
• Tuning ECN for Data Center Networks
• Datacast: A Scalable and Efficient Reliable Group Data Delivery Service for Data Centers
• PAST: Scalable Ethernet for Data Centers
• Improving Fairness, Efficiency, and Stability in HTTP-based Adaptive Video Streaming with FESTIVE
• Towards Agile and Smooth Video Adaptation in Dynamic HTTP Streaming
• Application-aware Request Splitting for Interactive Cloud Applications
• Automatic Test Packet Generation
• FindAll: A Local Search Engine for Mobile Phones
• A SOFT Way for OpenFlow Switch Interoperability Testing
• Defending against large-scale crawls in online social networks
• BotFinder: Finding Bots in Network Traffic Without Deep Packet Inspection
• Cachet: A Decentralized Architecture for Privacy Preserving Social Networking with Caching
• New Opportunities for Load Balancing in Network-Wide Intrusion Detection Systems
52
NOT: Wireless, Modeling, Hardware, Social Networking, Security

Related Work
53

Related Work
• vEmulab: scale-out emulation [ATC08]
• DieCast: time dilation [NSDI07]
• SliceTime: synchronized time slices [NSDI11]
All are complementary techniques that could be added to Mininet-HiFi.
None measure event fidelity (S1).
The last two use full-system virtualization (S2).
None evaluate reproducibility at scale (S3).
54

Progress Report:
Making runnable the network-paper default.
55

Runnable Paper Existence Proof
56
[Same paper page excerpt as shown earlier (Hedera benchmarks and §5.3 Sizing Router Buffers).]
read, click, reproduce

Reproduced Research Examples
reproducingnetworkresearch.wordpress.com
(or Google "reproducing network research")
20 and counting
57

Open-Source System w/ Active User Community
58
our guess: ~1000 users
Fall 2013: ~10K users
mininet.github.com

New Workflows
59
Idea → Simulate → Run(Testbed) → Publish
Idea → Run(Emulation) → Publish → Reproduce(Emulation)
Idea → Simulate → Run(Emulation) → Run(Testbed) → Publish

(aside)
Why God doesn't have a Ph.D.
1) He had only one major publication.
2) It was in Hebrew.
3) It had no references.
4) It wasn't published in a refereed journal.
5) Some even doubt he wrote it by himself.
6) It may be true that he created the world, but what has he done since then?
7) His cooperative efforts have been quite limited.
8) The scientific community has had a hard time replicating his results.
9) He never applied to the ethics board for permission to use human subjects.
10) When one experiment went awry he tried to cover it up by drowning his subjects.
11) When subjects didn't behave as predicted, he deleted them from the sample.
12) Some say he had his son teach the class.
13) He expelled his first two students for learning.
14) He rarely came to class, and he just told students to read the book.
15) Although there were only 10 requirements, most of his students failed his tests.
16) His office hours were infrequent and usually held on a mountaintop.
Source: http://www-psych.stanford.edu/~pinto/god.html
60

Look for the shirt. Questions?
61
mininet.github.com
reproducingnetworkresearch.wordpress.com

Backup Slides
62

Doesn't a simulator guarantee these invariants?
• Yes, exactly! A good one will.
• We're trying to get the network fidelity of an emulator to match a simulator with virtual time.
63

What about an RTOS?
• Every process must be bounded-time.
• Requires kernel mods.
• Conservative provisioning makes the resulting system too resource-limited to be useful.
• May needlessly limit resources when they could be used.
64

65
Host Invariants
Host with capacity C: is C_measured ≤ C_configured?  (host capacity, e.g., 20%)
[Figure: emulator vs. hardware capacity, C vs. C × 5.]
Log scheduler events.

66
Host Invariants
Host with capacity C: scheduler intervals.
[Figure: emulator vs. hardware, C vs. C × 2.]
Log scheduler events.
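For completeness, a minimal sketch (not from the talk; the log format is assumed) of checking the host-capacity invariant C_measured ≤ C_configured from per-interval scheduler accounting:

# Sketch: check C_measured <= C_configured for one emulated host.
# Assumes a hypothetical log of (interval_start_s, interval_end_s, cpu_seconds_used)
# records for the host, e.g. derived from scheduler tracing or cgroup accounting.

def capacity_violations(intervals, configured_share, slack=0.01):
    """Return (start_time, measured_share) for intervals where the host's
    measured CPU share exceeds its configured share by more than `slack`."""
    bad = []
    for start, end, used in intervals:
        measured_share = used / (end - start)
        if measured_share > configured_share + slack:
            bad.append((start, measured_share))
    return bad

# Example: a host capped at 20% CPU; the third interval runs hot.
log = [(0.0, 1.0, 0.19), (1.0, 2.0, 0.20), (2.0, 3.0, 0.31)]
print(capacity_violations(log, configured_share=0.20))   # [(2.0, 0.31)]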