Lec 14 | MIT 6.046J / 18.410J Introduction to Algorithms (SMA 5503), Fall 2005
And this is going to use some of the techniques we learned last time with respect to amortized analysis. What's neat about what we're going to talk about today is that it's a way of comparing so-called online algorithms. We're going to introduce this notion with a problem called self-organizing lists.

The setup for this problem is that we have a list L of n elements, and we have an operation access(x), which accesses item x in the list. It could be by searching, or it could be however you want to do it, but basically it goes and touches that element. We say the cost of that operation is the rank of x in the list, which is just the distance of x from the head of the list. That's what the user does: he or she simply runs a whole bunch of accesses on the list, one element after another, in any order. The other thing the algorithm can do is reorder L by transposing adjacent elements, and the cost for each transpose is one.
So, for example, suppose we have a list, and I access the element with key 14. That costs me one, two, three, four: the cost is four to access. We're going to have some sequence of accesses that the user performs, and obviously, if something is accessed frequently, we'd like to move it up toward the front of the list so that we don't have to search as far. To do that, I transpose: for example, if I transpose the adjacent elements 3 and 50, that costs me just one, and the two swap positions. Normally you'd do it by swapping pointers. So those are the two operations.
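In code, this setup might look like the following minimal sketch (the class and method names are my own, and the concrete list is just for illustration, with 14 at rank four and 3, 50 adjacent):

```python
# A minimal model of the self-organizing list problem:
# access(x) costs rank(x); transposing adjacent elements costs 1.

class SelfOrganizingList:
    def __init__(self, items):
        self.items = list(items)
        self.total_cost = 0

    def access(self, x):
        """Touch x; the cost is its 1-based distance from the head."""
        r = self.items.index(x) + 1  # rank of x
        self.total_cost += r
        return r

    def transpose(self, i):
        """Swap the adjacent elements at positions i and i+1; cost 1."""
        self.items[i], self.items[i + 1] = self.items[i + 1], self.items[i]
        self.total_cost += 1

L = SelfOrganizingList([12, 3, 50, 14, 17, 4])
print(L.access(14))   # rank 4, so cost 4
L.transpose(1)        # swap the adjacent pair 3 and 50, cost 1
print(L.items)        # [12, 50, 3, 14, 17, 4]
```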
And we're going to do this in what's called an online fashion. So let's define online: a sequence S of operations is provided one at a time, and for each operation, an online algorithm must execute the operation immediately, without getting a chance to look at what else is coming in the sequence. So when you make your decision for the first element, you don't get to see ahead to what the second, or third, or whatever is; when the second one arrives, you have to make your decision about what to do with it, and so forth. That's an online algorithm. Similarly, an offline algorithm may see all of S in advance: it gets to see the whole sequence, and then decide what it wants to do about element one, element two, or whatever.
So an offline algorithm can look at the whole sequence and say: I can see that item number 17 is being accessed a lot, so early on I'll move it up closer to the front of the list, and then those accesses cost less for the offline algorithm. The online algorithm doesn't get to see any of that. This is sort of like the game Tetris, if you're familiar with it. In Tetris, you get one shape after another coming down, and you have to twiddle it, move it to the side, and drop it into place. Sometimes you get a one-step look-ahead so you can see what the next shape is, but often it's purely online: you don't get to see the next shape, you have to make a decision for each one, and then you realize, ah, if you had made a different decision it would have been better. Offline Tetris would be: I get to see the whole sequence of shapes, and now let me decide what I'm going to do with this one.

In this setting, the goal for any algorithm A, whether online or offline, is to minimize the total cost, which we'll denote C_A(S), the cost of algorithm A on the sequence S. Any questions about the setup of this problem? So we have an online problem: we get these operations one at a time, and we have to decide what to do.
So let's do a worst-case analysis for this. If we're doing a worst-case analysis, we can view it as playing against an adversary who provides the sequence and can see what we do. So what's the adversary's strategy? To thwart our plots, yes. And how is he, or she, going to thwart them for this problem? Right: no matter how we reorder elements using transposes, the adversary looks at every step and says: what's the last element? That's the one I'm going to access. So the adversary always accesses the tail element of L; no matter what it is, no matter how we reorder things, for each operation the adversary just accesses the tail. The cost of any algorithm is then Omega(|S| * n), because you're going to pay a cost of n for every access in the sequence. So: not terribly good in the worst case.

Question? Right, that analysis is for the online algorithm; we're looking at trying to solve this in an online sense. The point is that against the online algorithm, the adversary can be incredibly mean, and just always access the thing at the end.
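The adversary argument can be simulated directly. In this sketch (variable names are mine) the list happens never to be reordered, but the same cost is forced however the algorithm transposes, because the adversary always re-targets whatever the current tail is:

```python
# The adversary strategy in code: always access the current tail element,
# so any algorithm pays rank n per access, i.e. Omega(|S| * n) overall.

n, num_accesses = 5, 8
items = list(range(n))
cost = 0
for _ in range(num_accesses):
    x = items[-1]                 # adversary picks the tail element
    cost += items.index(x) + 1    # the algorithm pays rank n to reach it
    # however the list is reordered here, the adversary re-targets the new tail
print(cost)  # num_accesses * n = 40
```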
So the history of this problem is that people said: well, if I can't do well in the worst case, maybe I should look at the average case, with the different elements having some probability distribution. So for the average-case analysis, suppose element x is accessed with probability p(x): some a priori distribution on the elements. Then if I put the elements into some fixed order and don't try to reorder, but simply ask whether there is a static ordering that works well for the distribution, the expected cost of the algorithm on a sequence is, by the definition of expectation,

E[C_A(S)] = sum over x of p(x) * rank_L(x),

the probability of accessing x times its cost, which is just its position in the list, summed over all the elements. So when is this summation minimized? When the most likely element has the lowest rank, and so on for the other elements. What does that mean? Yeah: sort them by decreasing probability. So the expected cost is minimized when L is sorted in decreasing order with respect to p, with the most likely element at the front. That way, whatever I access, it's more likely to be near the front. And that's not too difficult to actually prove: suppose two adjacent elements were out of order by probability, and show that swapping them improves this objective function.
So if you didn't know the distribution, this suggests the following heuristic: keep a count of the number of times each element is accessed, and maintain the list in order of decreasing count. Whenever something is accessed, increment its count; that moves it at most one position, which only costs one transpose to move it forward. Actually, I guess it could be more if you have a whole bunch of ties, so it could cost more. But the idea is that, over time, the law of large numbers says this approaches the probability distribution: the frequency with which you access an element, divided by the total number of accesses, will be its probability. And so you get things in decreasing probability order, assuming there is some distribution according to which all of these elements are accessed.
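A sketch of this count heuristic (the helper name is mine); after incrementing, it bubbles the element forward only past neighbors with strictly smaller counts, paying one transpose per swap:

```python
# Frequency-count heuristic: keep a counter per element and maintain the
# list in order of decreasing count.

def access_with_count(items, counts, x):
    """Access x, increment its count, and bubble it forward past any
    element with a strictly smaller count (each swap is one transpose)."""
    i = items.index(x)
    cost = i + 1                      # access cost: rank of x
    counts[x] += 1
    while i > 0 and counts[items[i - 1]] < counts[x]:
        items[i - 1], items[i] = items[i], items[i - 1]
        cost += 1                     # one transpose
        i -= 1
    return cost

items = ["a", "b", "c"]
counts = {"a": 0, "b": 0, "c": 0}
access_with_count(items, counts, "c")
access_with_count(items, counts, "c")
print(items)  # c's count of 2 has carried it to the front
```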
So it doesn't seem like there's that much more you could really do here. And that's why I think this notion of competitive analysis is so persuasive: it's really amazingly strong, and it came about because of what people were doing in practice. In practice, what people implemented was the so-called move-to-front heuristic. The basic idea is: after you access an element, just move it up to the front. That only doubles the cost of accessing the element, because I go and access it, chasing it down and paying the rank, and then I do rank-many transposes to bring it back to the front. So it only costs me a factor of two, and if it happens to be a frequently accessed element, over time you'd hope the most likely elements end up near the front of the list. So: after accessing x, move x to the head of the list using transposes, at cost 2 * rank_L(x), where the factor of two has two parts: one for the access, and one for the transposes.

That's essentially what they did. And one of the nice properties is that if there is locality in the access pattern, if it's not just a static distribution but rather, once I've accessed something, it's more likely I'm going to access it again, which tends to be the case for many types of input patterns, then this responds well, because the element is up near the front if I access it again soon after. This is what's called temporal locality: it may be that some element is very hot for a while, and then it gets very cold. This type of algorithm responds very well to the hotness of the accessing. So move-to-front responds well to locality in S.
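Move-to-front itself is only a few lines. In this sketch (names are mine) the pop/insert pair stands in for the rank-1 adjacent transposes, so the charged cost is 2r - 1, within the 2 * rank bound from the lecture:

```python
# Move-to-front sketch: after accessing x at rank r, use r-1 adjacent
# transposes to bring it to the head, for a total cost of at most 2r.

def mtf_access(items, x):
    r = items.index(x) + 1             # access cost: rank of x
    items.insert(0, items.pop(r - 1))  # r-1 adjacent transposes, cost r-1
    return r + (r - 1)                 # total cost 2r - 1 <= 2r

items = [5, 2, 9, 7]
cost = mtf_access(items, 9)
print(cost, items)  # 5 [9, 5, 2, 7]
```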
So this is roughly what was known up to the point that a very famous paper was written by Danny Sleator and Bob Tarjan, who took a totally different approach to looking at this kind of problem. It's an approach you now see everywhere, from the analysis of caching in high-performance processors, to analyses of disk paging, to a huge number of other applications of this basic technique: the technique of competitive analysis.

So here's the definition. An online algorithm A is alpha-competitive if there exists a constant k such that for any sequence S of operations,

C_A(S) <= alpha * C_OPT(S) + k,

where OPT is the optimal offline algorithm: the one that knows the whole sequence and does the absolute best it could possibly do on it. This is sometimes called "God's algorithm" — not to bring religion into the classroom, or to offend anybody, but that is what people sometimes call it: fully omniscient, knows absolutely the best thing that could be done, sees into the future, the whole works. What we're saying is that A's cost is at most this factor alpha, which could be a constant or a function of things, times the cost of the best possible algorithm, plus a constant. So, for example, if alpha is two and we say A is two-competitive, that means you're going to do, at worst, twice as badly as the algorithm that has all the information, even though you're operating online. It's a really pretty powerful notion. And what's interesting is that it's not even clear, to my mind, that such algorithms should exist. What's pretty remarkable, I think, is that there is no assumption of a probability distribution or anything: whatever sequence you're given, you are within a factor of alpha, essentially, of the best algorithm, which is pretty remarkable.
And so we're going to prove the following theorem, the one that Sleator and Tarjan proved: MTF is 4-competitive for self-organizing lists.

The idea here is: suppose the adversary says, I'm always going to access the thing at the end of the list, like we said at the beginning. I'm going to make MTF work really badly, because it will move that element all the way up to the front, and then I'll just access the thing way at the end again. Well, it turns out that yes, that's a bad sequence for move-to-front, and it will take a long time. But it turns out God couldn't have done better by more than a factor of four, no matter how long the list is. That's pretty amazing. So that's a bad sequence; but if the sequence exhibits any kind of locality, or anything else that could be taken advantage of if you could see the whole thing, MTF takes advantage of it too, within a factor of four. It's a pretty remarkable theorem, and it's the basis of many analyses of online algorithms. Almost all online algorithms today are analyzed using some kind of competitive analysis. Not always: sometimes you do probabilistic analysis, or whatever, but the dominant technique is competitive analysis, because then you don't have to make any statistical assumptions: you just prove that it works well no matter what. This is remarkable, I think. Isn't it remarkable?

So let's prove this theorem; we're going to spend the rest of the lecture on this proof. The proof in some ways is not hard, but it's also not necessarily completely intuitive, so you will have to pay attention.
So let's get some notation down. Let L_i be MTF's list after the i-th access, and let L_i* be OPT's list after the i-th access. Generally I'll put a star on things belonging to OPT, and nothing on things belonging to MTF. So we set it up so that operation i transforms list i-1 into list i: that's what the i-th operation does. Move-to-front does it by moving whatever element was accessed to the front, and OPT does whatever OPT thinks is the best thing to do; we don't know.

Let c_i be MTF's cost for the i-th operation:

c_i = 2 * rank_{L_{i-1}}(x), if the operation accesses x,

twice the rank in L_{i-1}, because we access x in L_{i-1} and transform it into L_i. Similarly, let c_i* be OPT's cost for the i-th operation:

c_i* = rank_{L_{i-1}*}(x) + t_i,

the rank of x in whatever OPT's list is at that step, because OPT has to access it, plus some number of transposes t_i, if OPT performs t_i transposes.
So we have a setup with two different lists being managed, and different costs for each, and what we're interested in is comparing MTF's list with OPT's list at any point in time. How do you think we're going to do that? What general technique from last lecture should we use to compare these two lists? Well, it's going to be amortized, but how? A potential function, good. We're going to define a potential function that measures how far apart these two lists are.

So define the potential function Phi, mapping the set of MTF's lists to the real numbers, by

Phi(L_i) = 2 * |{ (x, y) : x precedes y in L_i and y precedes x in L_i* }|.

Here "x precedes y in L_i" means the relationship between two elements that says: as I walk the list from the head, I hit x first. So what I'm interested in here are, in some sense, the disagreements between the two lists: pairs where x precedes y in MTF's list, but y precedes x in OPT's list. They disagree. And what we take is the cardinality of that set of pairs, multiplied by two.
So there's a name for this type of thing; we saw it when we were doing sorting, very briefly. Anybody remember the name? Inversions: good. So Phi is twice the number of inversions between the two lists.

Let's just do an example with five elements; I'll use characters for the order to keep things simple. Say MTF's list is L_i = (E, C, A, D, B) and OPT's list is L_i* = (C, A, B, D, E). Then Phi(L_i) is twice the cardinality of the set of pairs that are out of order between the lists. So what we want to do is see which pairs disagree. E and C are in this order here, but C and E in that order there, so EC is out of order and counts as one of my pairs; likewise EA, ED, and EB are out of order. Then C and A are in the same order in both, so that doesn't count; CD and CB are in order, so nothing more with C; AD and AB are in order; and finally D before B here, but B before D there, so DB is out of order, and that's the last one. So the set is {EC, EA, ED, EB, DB}: its cardinality is five, I have five inversions between the two lists, and the potential function is Phi(L_i) = 2 * 5 = 10.
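The potential function translates directly into code. This sketch (the function name is mine) counts the disagreeing pairs, using the two lists L_i = (E, C, A, D, B) and L_i* = (C, A, B, D, E) implied by the worked pairs above:

```python
# Phi = 2 * (number of inversions): pairs ordered one way in MTF's list
# and the other way in OPT's list.

from itertools import combinations

def phi(mtf_list, opt_list):
    pos_opt = {x: i for i, x in enumerate(opt_list)}
    inversions = sum(
        1
        for x, y in combinations(mtf_list, 2)  # x precedes y in mtf_list...
        if pos_opt[y] < pos_opt[x]             # ...but y precedes x in opt_list
    )
    return 2 * inversions

# Five inversions {EC, EA, ED, EB, DB}, so Phi = 10.
print(phi(list("ECADB"), list("CABDE")))  # 10
```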
So let's check some properties of this potential function. The first one: notice that Phi(L_i) >= 0 for all i. The number of inversions might be zero, but it's never less than zero; it's always at least zero. That's one of the properties we normally want when dealing with potential functions. And the other thing: what about Phi(L_0)? Is that equal to zero? Well, it depends on what lists they start with — what's the initial ordering? It's zero if they start with the same list, since then there are no inversions. They might start with different lists; we'll talk about that later on, but let's say for now that Phi(L_0) = 0 because they start with the same list. That seems like a fair comparison.
So we now have a potential function counting up how different these two lists are. Intuitively, the more differences there are between the lists, the more stored-up work we have available to pay for operations. That's the basic idea: the more OPT changes its list so that it's not the same as ours, in some sense the more we, as MTF, will be in a position to take advantage of that difference to pay for the work we have to do. We'll see how that plays out.
Let's first also make another observation: how much does Phi change from one transpose? That's asking: if you do a transpose, what happens to the number of inversions? Yes, it's basically plus or minus one, because a transpose creates or destroys exactly one inversion. Think about it: suppose I switch two adjacent elements, say C and A. The relationship of C and A to everything else in the list stays the same. The only thing that can possibly change is the pair itself: if they were in the same order as in the other list when I transpose them, I've created an inversion; or, if they were in the wrong order when I transpose them, now they're in the right order. So therefore the change to the potential function from one transpose is plus or minus two, because we're counting twice the number of inversions. Any questions about that? So transposes don't change the potential very much: it either goes up by two or down by two, just by one inversion.
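As a quick numerical check of this plus-or-minus-two property, this sketch (names mine) uses the example lists (E, C, A, D, B) and (C, A, B, D, E) from before, with its own small inversion counter:

```python
# Check that one adjacent transpose changes Phi by exactly +2 or -2.

from itertools import combinations

def phi(mtf_list, opt_list):
    pos = {x: i for i, x in enumerate(opt_list)}
    return 2 * sum(1 for x, y in combinations(mtf_list, 2) if pos[y] < pos[x])

opt = list("CABDE")
mtf = list("ECADB")
base = phi(mtf, opt)          # 10, as in the worked example
deltas = []
for i in range(len(mtf) - 1):
    swapped = mtf[:i] + [mtf[i + 1], mtf[i]] + mtf[i + 2:]
    deltas.append(phi(swapped, opt) - base)
print(deltas)  # [-2, 2, 2, -2]: each transpose moves Phi by exactly 2
```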
So now let's take a look at how these two algorithms operate. What happens when operation i accesses x in the two lists? To analyze that, when we access x we look at the two lists and see what the relationship is, based on which elements come before and after x in each. A picture is very helpful here: here's L_{i-1}, MTF's list, with x somewhere in it, and here's L_{i-1}*, OPT's list, with x somewhere else. So define, for the elements other than x:

A = { y : y precedes x in both L_{i-1} and L_{i-1}* },
B = { y : y precedes x in L_{i-1} but follows x in L_{i-1}* },
C = { y : y follows x in L_{i-1} but precedes x in L_{i-1}* },
D = { y : y follows x in both }.

So the set of y that come before x in MTF's list is A union B. Everything after x in MTF's list had better be C union D, because I've got A union B before it, and I've got x. The set before x in OPT's list is A union C, and after x in OPT's list is B union D. Those are the four sets we're going to care about — actually, mostly we'll care about A and B.

We also know something about the ranks here. The position of x in MTF's list is r = rank_{L_{i-1}}(x), and in OPT's list it's r* = rank_{L_{i-1}*}(x). And what we're interested in is characterizing the ranks in terms of the sets:

r = |A| + |B| + 1,
r* = |A| + |C| + 1.
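These four sets and the two rank identities can be checked mechanically. This sketch (helper names are mine) uses the lists (E, C, A, D, B) and (C, A, B, D, E) from the inversion example, with x = D:

```python
# A = before x in both lists, B = before in MTF's list only,
# C = before in OPT's list only, D = after x in both.
# Then r = |A| + |B| + 1 and r* = |A| + |C| + 1.

def split_sets(mtf_list, opt_list, x):
    before_mtf = set(mtf_list[: mtf_list.index(x)])
    before_opt = set(opt_list[: opt_list.index(x)])
    rest = set(mtf_list) - {x}
    A = before_mtf & before_opt
    B = before_mtf - before_opt
    C = before_opt - before_mtf
    D = rest - before_mtf - before_opt
    return A, B, C, D

mtf, opt = list("ECADB"), list("CABDE")
A, B, C, D = split_sets(mtf, opt, "D")
r = mtf.index("D") + 1
r_star = opt.index("D") + 1
print(r == len(A) + len(B) + 1, r_star == len(A) + len(C) + 1)  # True True
```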
So let's take a look at what happens when these two algorithms do their thing. When the access to x occurs, MTF moves x to the front of the list; it goes right up to the front. So how many inversions are created and destroyed? When we move x to the front, what we're concerned about is anything whose order relative to x changes compared with the other list. Let's take a look at A: those are the elements that come before x in both lists, so when I move x to the front, each element of A accounts for one more inversion. Does everybody see that? So I create |A| inversions. And we're going to destroy: everything in B came before x in this list and after x in that one, but after we move x to the front, they're in the right order. So I destroy |B| inversions. That's what happens when we operate with move-to-front: by doing this movement, we create |A| inversions and destroy |B| inversions.
Now let's take a look at what OPT does. We don't know what OPT does; it might move x this way or that way. But for each transpose, we're interested in how many inversions it creates, and we already argued that it creates at most one inversion per transpose. So OPT's transposes can create more inversions — at most t_i of them. Thus the change in potential is at most

Phi(L_i) - Phi(L_{i-1}) <= 2 * (|A| - |B| + t_i),

where t_i, remember, is the number of transposes OPT performs for the i-th operation. So the change in potential is at most twice this quantity. Now we're going to see how to use this fact, together with the two rank equations, to show that OPT can't be much better than MTF.
50:27 - 50:27OK, good.
The way we are going to do that -
50:33 - 50:33is look at the amortized cost of
the i'th operation. -
50:38 - 50:38OK, what's MTF's amortized
cost? -
50:42 - 50:42OK, and then we'll make the
argument, which is the one you -
always make, that the amortized
costs upper-bound the true costs, -
50:54 - 50:54OK?
But the amortized cost is going -
50:57 - 50:57to be easier to calculate.
OK, so amortized cost is just C -
51:04 - 51:04hat, actually,
let me make sure I have lots of -
51:09 - 51:09room here on the right,
c_i hat, which is equal to the -
51:15 - 51:15true cost plus the change in
potential. -
51:19 - 51:19OK, that's just the definition
of amortized cost when given -
51:25 - 51:25potential functions,
OK? -
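As a quick sanity check of that definition: summing c_i hat = c_i + phi_i − phi_(i−1) over any run telescopes, so total true cost and total amortized cost differ only by the boundary potentials. The numbers below are arbitrary, chosen just to exercise the identity.

```python
# Telescoping check: sum of true costs = sum of amortized costs + phi_0 - phi_final.
costs = [3, 1, 4, 1, 5]          # arbitrary true costs c_i
phi = [0, 2, 1, 3, 0, 2]         # arbitrary potentials; phi[0] is the initial one
amortized = [c + phi[i + 1] - phi[i] for i, c in enumerate(costs)]
print(sum(costs), sum(amortized))   # 14 16
assert sum(costs) == sum(amortized) + phi[0] - phi[-1]
```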
51:29 - 51:29So, what is the cost of
operation i, OK, -
51:37 - 51:37in this context here?
OK, we accessed x there. -
51:45 - 51:45What's the cost of operation i?
Two times the rank of x, -
51:56 - 51:56which is 2r.
OK, so 2r, that part of it. -
52:05 - 52:05OK, well, we have an upper
bound on the change in -
52:13 - 52:13potential.
That's this. -
52:17 - 52:17OK, so that's two times the
cardinality of A minus -
52:25 - 52:25cardinality of B plus t_i.
OK, everybody with me? -
52:34 - 52:34Yeah?
OK, I see lots of nods. -
52:37 - 52:37That's good.
OK, that's equal to 2r plus two -
52:42 - 52:42times the size of A minus,
OK, I want to plug in for B, -
52:48 - 52:48and it turns out very nicely.
I have an equation involving A, -
52:56 - 52:56B, and r.
So, I get rid of the variable -
53:00 - 53:00size of B by just plugging that
in. -
53:06 - 53:06OK, and so what do I plug in
here? -
53:11 - 53:11What's B equal to?
Yeah, r minus size of A minus -
53:19 - 53:19one.
I wrote it the other way. -
53:24 - 53:24OK, and then plus t_i.
OK, and this is since r is A -
53:32 - 53:32plus B plus one.
OK, everybody with me still? -
53:38 - 53:38I'm just doing algebra.
We've got to make sure we do -
53:45 - 53:45the algebra right.
OK, so that's equal to, -
53:50 - 53:50let's just multiply all this
out now and get 2r plus, -
I have 2A here, minus minus 2A.
So, that's 4A. -
54:04 - 54:04And then, two times minus r is
minus 2r. -
54:09 - 54:09Two times minus one is minus
two. -
54:13 - 54:13Oh, but it's minus-minus two,
so it's plus two. -
54:19 - 54:19OK, and then I have 2t_i.
So, that's just algebra. -
54:25 - 54:25OK, so that's not bad.
We've just got rid of another -
54:31 - 54:31variable.
What variable did we get rid -
54:38 - 54:38of?
r. -
54:38 - 54:38It didn't matter what the rank
was as long as I knew what the -
54:47 - 54:47number of inversions was here.
OK, so that's now equal to 4A -
54:54 - 54:54plus two plus 2t_i.
And, that's less than or equal -
55:02 - 55:02to, I claim, four times r star
plus t_i using our other fact. -
55:11 - 55:11Since r star is equal to the
size of A plus the size of C, -
55:20 - 55:20plus one, then that's greater
than or equal to the size of A -
55:29 - 55:29plus one.
OK, if I look at this, -
55:35 - 55:35I'm basically looking at A.
The fact that A, -
55:43 - 55:43what did I do here?
If r star is greater than or -
55:51 - 55:51equal to A plus one,
right, so therefore, -
55:59 - 55:59A plus one, good.
Yeah, so this is basically less -
56:05 - 56:05than or equal to 4A plus four,
which is four times A plus one. -
56:10 - 56:10I probably should have put in
another algebra step here, -
56:14 - 56:14OK, because if I can't verify
it like this, -
56:17 - 56:17then I get nervous.
This is basically, -
56:20 - 56:20at most, 4A plus four.
That's four times A plus one, -
56:23 - 56:23and A plus one is less than or
equal to r star. -
56:27 - 56:27And then, 2t_i is,
at most, 4t_i. -
56:30 - 56:30So, I've got this.
Does everybody see where that -
56:43 - 56:43came from?
But what is r star plus t_i? -
56:54 - 56:54What is r star plus t_i?
What is it? -
57:05 - 57:05It's c_i star.
That's just c_i star. -
57:13 - 57:13So, the amortized cost of i'th
operation is, -
57:24 - 57:24at most, four times opt's cost.
OK, that's pretty remarkable. -
57:37 - 57:37OK, so amortized cost of the
i'th operation is just four -
57:44 - 57:44times opt's cost.
Now, of course, -
57:48 - 57:48we have to now go through and
analyze the total cost. -
57:55 - 57:55But this is now the routine way
that we analyze things with a -
58:03 - 58:03potential function.
So, the costs of MTF of S is -
58:13 - 58:13just the summation of the
individual costs, -
58:22 - 58:22OK, by definition.
And that is just the sum, -
58:31 - 58:31i equals one,
to S of the amortized cost -
58:39 - 58:39minus the change in
potential. -
58:45 - 58:45OK, did I do this right?
No, I put the parentheses in -
58:56 - 58:56the wrong place.
Now I've got it right. -
59:02 - 59:02Good.
I just missed a parenthesis. -
59:05 - 59:05OK, so this is,
so in the past what I did was I -
59:08 - 59:08expressed the amortized cost as
being equal to c_i plus the -
59:13 - 59:13change in potential.
I'm just throwing these two -
59:17 - 59:17terms over to the other side and
saying, what's the true cost in -
59:22 - 59:22terms of the amortized cost?
OK, so I get c hat of i plus -
59:27 - 59:27phi sub L_i minus one minus phi
of L_i, OK, by making that -
59:32 - 59:32substitution.
OK, that's less than or equal -
59:37 - 59:37to since this is linear.
Well, I know what the sum of -
59:42 - 59:42the amortized cost is.
It's, at most, -
59:45 - 59:454c_i star.
So, the sum of them is, -
59:48 - 59:48at most, the sum,
i equals one to S, of 4c_i star. -
59:53 - 59:53And then, as happens in all
these things, -
59:57 - 59:57you get a telescope with these
terms. -
60:01 - 60:01Every term is added in once and
subtracted out once, -
60:09 - 60:09except for the ones at the
limit. -
60:14 - 60:14So, I get plus phi of L_0 minus
phi of L sub cardinality of S. -
60:23 - 60:23And now, this term is zero.
And this term is greater than -
60:31 - 60:31or equal to zero.
OK, so therefore this whole -
60:39 - 60:39thing is less than or equal to,
well, what's that? -
60:48 - 60:48That's just four times opt's
cost. -
60:53 - 60:53And so, we're four competitive.
OK, this is amazing, -
61:01 - 61:01I think.
It's not that hard, -
61:03 - 61:03OK, but it's quite amazing that
just by doing a simple -
61:07 - 61:07heuristic, you're nearly as good
as any omniscient algorithm -
61:12 - 61:12could possibly be.
OK, you're nearly as good. -
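A small simulation makes the bound concrete. The cost model below follows the lecture: rank(x) to reach x, plus rank(x) − 1 adjacent transposes to bring it to the front (2r − 1 per access, slightly tighter than the 2r used on the board). The comparison list is a frequency-ordered static list, which is only a stand-in for the true offline optimum, not opt itself.

```python
def mtf_cost(initial, seq):
    """Serve seq with move-to-front: rank(x) to reach x, plus
    rank(x) - 1 adjacent transposes to bring it to the front."""
    lst = list(initial)
    total = 0
    for x in seq:
        r = lst.index(x) + 1
        total += 2 * r - 1
        lst.pop(r - 1)
        lst.insert(0, x)
    return total

def static_cost(order, seq):
    """Cost of a fixed list that is never reordered."""
    rank = {x: i + 1 for i, x in enumerate(order)}
    return sum(rank[x] for x in seq)

seq = [3, 3, 3, 1]
print(mtf_cost([1, 2, 3], seq))      # 10
print(static_cost([3, 1, 2], seq))   # 5 -- and 10 <= 4 * 5, consistent with the bound
```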
61:15 - 61:15And, in fact,
in practice, -
61:17 - 61:17this is a great heuristic.
So, if ever you have things -
like a hash table where you're
actually resolving collisions by chaining, -
61:26 - 61:26OK, often it's the case that if
when you access the elements, -
61:31 - 61:31you're just bringing them up to
the front of the list if it's an -
61:36 - 61:36unsorted list that you've put
them into, just bring them up to -
61:41 - 61:41the front.
You can easily save 30 to 40% -
61:45 - 61:45in run time for the accessing to
the hash table because you will -
61:51 - 61:51be much more likely to find the
elements inside. -
61:55 - 61:55Of course, it depends on the
distribution and so forth, -
61:59 - 61:59for empirical matters,
but the point is that you are -
62:04 - 62:04not going to be too far off from
the ordering that an optimal -
62:09 - 62:09algorithm would do,
optimal off-line algorithm: -
62:13 - 62:13I mean, amazing.
OK: optimal off-line. -
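The hash-table heuristic just described can be sketched as follows. MTFHashTable and its fixed bucket count are illustrative names for this sketch, not a real library API; a hit splices the found pair out of its chain and puts it at the front.

```python
class MTFHashTable:
    """Chained hash table where a hit moves the found pair to the
    front of its chain -- the heuristic from the lecture. Sketch only:
    the bucket count is fixed and there is no resizing."""
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _chain(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        chain = self._chain(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain.pop(i)
                break
        chain.insert(0, (key, value))

    def get(self, key):
        chain = self._chain(key)
        for i, (k, v) in enumerate(chain):
            if k == key:
                chain.pop(i)             # splice the pair out ...
                chain.insert(0, (k, v))  # ... and put it at the front
                return v
        raise KeyError(key)

t = MTFHashTable(nbuckets=1)   # one bucket, so every key shares a chain
for k in "abc":
    t.put(k, k.upper())
t.get("a")                              # 'a' moves to the front of the chain
print([k for k, _ in t.buckets[0]])     # ['a', 'c', 'b']
```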
62:17 - 62:17Now, it turns out that in the
reading that we assigned, -
62:22 - 62:22so, we assigned you Sleator and
Tarjan's original paper. -
62:28 - 62:28In that reading,
they actually have a slightly -
62:39 - 62:39different model where they count
transposes that move the accessed -
62:55 - 62:55element x towards the front
of the list as free. -
63:09 - 63:09OK, so, and this basically
models, so here's the idea is if -
63:15 - 63:15I actually have a linked list,
and when I chase down, -
63:20 - 63:20once I find x,
I can actually move x up to the -
63:24 - 63:24front with just a constant
number of pointer operations to -
63:29 - 63:29splice it out and put it up to
the front. -
63:34 - 63:34I don't actually have to
transpose all way back down. -
63:37 - 63:37OK, so that's kind of the model
that they use, -
63:39 - 63:39which is a more realistic
model. -
63:41 - 63:41OK, I presented this argument
because it's a little bit -
63:44 - 63:44simpler.
OK, and the model is a little -
63:46 - 63:46bit simpler.
But in their model, -
63:47 - 63:47when you access
something, you want to bring it -
63:50 - 63:50up to the front,
or anything that you happen to -
63:53 - 63:53go across during that time,
you could bring up to the front -
63:56 - 63:56essentially for free.
This models -
64:06 - 64:06splicing x in and out of L in
constant time. -
64:17 - 64:17Then, MTF is,
it turns out, -
64:24 - 64:24two-competitive.
It's within a factor of two of -
64:32 - 64:32optimal, OK, if you use that.
And that's actually a good -
64:36 - 64:36exercise to work through.
You could also go read about it -
64:41 - 64:41in the reading to understand
this better, to look to see -
64:45 - 64:45where you would use those
things. -
64:47 - 64:47You have to have another term
representing the number of, -
64:52 - 64:52quote, "free" transposes.
But it turns out that all the -
64:56 - 64:56math works out pretty much the
same. -
64:58 - 64:58OK, let's see,
another thing I promised you -
65:02 - 65:02is to look at the
case, what if they don't start -
65:06 - 65:06with the same lists?
OK, what if the two lists are -
65:14 - 65:14different when they start?
Then, the potential function at -
65:25 - 65:25the beginning might be as big as
what? -
65:33 - 65:33How big does the potential
function start out if the -
65:38 - 65:38lists are different?
So, suppose we're starting out, -
65:43 - 65:43you have a list,
and opt says, -
65:46 - 65:46OK, I'm going to start out by
ordering my list according to -
65:51 - 65:51the sequence that I want to use,
OK, and MTF orders it according -
65:57 - 65:57to the sequence it must use.
What list is opt going to start -
66:03 - 66:03out with as an adversary?
Yeah, it's going to pick the -
66:10 - 66:10reverse of what ever MTF starts
out with, right, -
66:16 - 66:16because then,
if he picks the reverse, -
66:22 - 66:22what's the number of
inversions? -
66:26 - 66:26It's how many inversions in a
reverse ordered list? -
66:34 - 66:34Yeah, n choose two,
OK. -
66:36 - 66:36Is it n choose two,
or n minus one choose two? -
66:42 - 66:42It's n choose two, n(n-1)/2,
OK, inversions that you get -
66:47 - 66:47because it's basically a
triangular number when you add -
66:53 - 66:53them up.
But in any case, -
66:56 - 66:56it's order n^2,
worst case. -
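A quick check of that count, using the same definition of inversions as before: a fully reversed list of n elements inverts every pair, which is n choose 2 = n(n − 1)/2.

```python
from math import comb  # n choose k; available since Python 3.8

def inversions(lst, ref):
    """Count pairs that appear in opposite order in lst versus ref."""
    pos = {x: i for i, x in enumerate(ref)}
    n = len(lst)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[lst[i]] > pos[lst[j]])

n = 6
forward = list(range(n))
print(inversions(forward[::-1], forward), comb(n, 2))   # 15 15
```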
67:00 - 67:00So, what does that do to our
analysis here? -
67:07 - 67:07It says that the cost of MTF of
S is going to be, -
67:15 - 67:15well, this is no longer zero.
This is now n^2. -
67:23 - 67:23OK, so we get that costs of MTF
of S is, at most, -
67:31 - 67:31four times opt's thing plus
order n^2, OK? -
67:39 - 67:39And, if we look at the
definition, did we erase it -
67:51 - 67:51already?
OK, this is still -
67:59 - 67:59four-competitive, OK,
since n^2 is constant as the -
68:10 - 68:10size of S goes to infinity.
This is, once again, -
68:19 - 68:19sort of your notion of,
what does it mean to be a -
68:22 - 68:22constant?
OK, so as the size of the list -
68:25 - 68:25gets bigger, all we're doing is
accessing whatever that number, -
68:29 - 68:29n, is of elements.
That number doesn't grow with -
68:32 - 68:32the problem size,
OK, even if it starts out as -
68:35 - 68:35some variable number,
n. -
68:37 - 68:37OK, it doesn't grow with the
problem size. -
68:43 - 68:43We still end up being
competitive. -
68:47 - 68:47This is just the k that was in
that definition of -
68:54 - 68:54competitiveness.
OK, any questions? -
68:58 - 68:58Yeah?
Well, so you could change the -
69:03 - 69:03cost model a little bit.
Yeah. -
69:06 - 69:06And that's a good one to work
out. -
69:09 - 69:09But if you say the cost of
transposing, so, -
69:13 - 69:13the cost of transposing is
probably moving two pointers, -
69:18 - 69:18approximately.
No, wait, three pointers. -
69:22 - 69:22So, suppose that the cost of,
wow, that's a good exercise, -
69:27 - 69:27OK?
Suppose the cost was three -
69:30 - 69:30times to do a transpose,
was three times the cost of -
69:35 - 69:35doing an access,
of following a pointer. -
69:40 - 69:40OK, how would that change the
number here? -
69:44 - 69:44OK, good exercise,
great exercise. -
69:47 - 69:47OK, hmm, good final question.
OK, yes, it will affect the -
69:52 - 69:52constant here just as when we do
the free transpose, -
69:57 - 69:57when we move things towards the
front, that we consider those as -
70:02 - 70:02free, OK.
Those operations end up -
70:06 - 70:06reducing the constant as well.
OK, but the point is that this -
70:12 - 70:12constant is independent of the
constant having to do with the -
70:17 - 70:17number of elements in the list.
So that's a different constant. -
70:23 - 70:23So, this is a constant.
OK, and so as with a lot of -
70:27 - 70:27these things,
there's two things. -
70:31 - 70:31One is, there's the theory.
So, theory here backs up -
70:35 - 70:35practice.
OK, those practitioners knew -
70:37 - 70:37what they were doing,
OK, without knowing what they -
70:40 - 70:40were doing.
OK, so that's really good. -
70:43 - 70:43OK, and we have a deeper
understanding that's led to, -
70:47 - 70:47as I say, many algorithms for
things like, the important ones -
70:51 - 70:51are like paging.
So, what's the common page -
70:54 - 70:54replacement policy that people
study, that people have in most -
70:58 - 70:58operating systems?
Who's done 6.033 or something? -
71:02 - 71:02Yeah, it's Least Recently Used,
LRU. -
71:04 - 71:04People have heard of that,
OK. -
71:06 - 71:06So, you can analyze LRU
competitively, and show that LRU -
71:10 - 71:10is actually competitive with
optimal page replacement under -
71:13 - 71:13certain assumptions.
OK, and there are also other -
71:16 - 71:16things.
Like, people do random -
71:18 - 71:18replacement algorithms,
and there are a whole bunch of -
71:22 - 71:22other kinds of things that can
be analyzed with the competitive -
71:26 - 71:26analysis framework.
OK, so it's very cool stuff. -
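A minimal LRU sketch along those lines, assuming a simple page-fault model (illustrative only, not how any particular operating system implements it):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU page-replacement sketch: on a miss with a full
    cache, evict the least recently used page."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, page):
        """Return True on a hit, False on a miss (page fault)."""
        if page in self.pages:
            self.pages.move_to_end(page)    # mark as most recently used
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict least recently used
        self.pages[page] = True
        return False

cache = LRUCache(2)
hits = [cache.access(p) for p in [1, 2, 1, 3, 2]]
print(hits)   # [False, False, True, False, False]
```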
71:29 - 71:29And, we are going to see more
in recitation on Friday, -
71:32 - 71:32see a couple of other really
good problems that are maybe a -
71:36 - 71:36little bit easier than this one,
OK, definitely easier than this -
71:40 - 71:40one.
OK, they give you hopefully -
71:45 - 71:45some more intuition about
competitive analysis. -
71:51 - 71:51I also want to warn you about
next week's problem set. -
71:58 - 71:58So, next week's problem set has
a programming assignment on it. -
72:07 - 72:07OK, and the programming
assignment is mandatory, -
72:11 - 72:11meaning, well,
all the problem sets are -
72:14 - 72:14mandatory as you know,
but if you decide not to do a -
72:18 - 72:18problem there's a little bit of
a penalty and then the penalties -
72:23 - 72:23scale dramatically as you stop
doing problem sets. -
72:26 - 72:26But this one is
mandatory-mandatory. -
72:30 - 72:30OK, you don't pass the class.
You'll get an incomplete if you -
72:34 - 72:34do not do this programming
assignment. -
72:36 - 72:36Now, I know that some people
are less practiced with -
72:39 - 72:39programming.
And so, what I encourage you to -
72:42 - 72:42do over the weekend is spent a
few minutes and work on your -
72:46 - 72:46programming skills if you're not
up to snuff in programming. -
72:50 - 72:50It's not going to be a long
assignment, but if you don't -
72:53 - 72:53know how to read a file and
write out a file, -
72:56 - 72:56and be able to write a dozen
lines of code, -
72:59 - 72:59OK, if you are weak on that,
this weekend would be a good -
73:02 - 73:02idea to practice reading in a
text file. -
73:06 - 73:06It's going to be a text file.
Read in a text file, -
73:10 - 73:10do some manipulations,
write out a text file, -
73:13 - 73:13OK?
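A minimal warm-up along those lines in Python. The filenames and the uppercasing "manipulation" here are placeholders, not anything from the actual assignment.

```python
# Practice the assignment workflow: read a text file, transform it, write a text file.
with open("input.txt", "w") as f:          # create a sample input first
    f.write("move to front\ncompetitive analysis\n")

with open("input.txt") as f:
    lines = [line.rstrip("\n") for line in f]

processed = [line.upper() for line in lines]   # the real work would go here

with open("output.txt", "w") as f:
    f.write("\n".join(processed) + "\n")

print(processed)   # ['MOVE TO FRONT', 'COMPETITIVE ANALYSIS']
```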
So, I don't want people to get -
73:15 - 73:15caught with this being mandatory
and that not have time to finish -
73:19 - 73:19it because they are busy trying
to learn how to program in short -
73:24 - 73:24order.
I know some people take this -
73:26 - 73:26course without quite getting all
the programming prerequisites. -
73:31 - 73:31Here's where you need it.
Question? -
73:34 - 73:34No language limitations.
Pick your language. -
73:38 - 73:38The answer will be written in,
I think, Java, -
73:42 - 73:42and Eric has graciously
volunteered to use Python for -
73:46 - 73:46his solution to this problem.
We'll see whether he lives up -
73:51 - 73:51to that promise.
You did already? -
73:54 - 73:54OK, and George wrote the Java
solution. -
73:57 - 73:57And so, C is fine.
Matlab is fine. -
74:00 - 74:00OK, what else is fine?
Anything is fine. -
74:05 - 74:05Scheme is fine.
Scheme is fine. -
74:08 - 74:08Scheme is great.
OK, so any such things will be -
74:12 - 74:12just fine.
So, we don't care what language -
74:15 - 74:15you program in,
but you will have to do -
74:18 - 74:18programming to solve this
problem. -
74:21 - 74:21OK, so thanks very much.
See you next week.
- Title:
- Lec 14 | MIT 6.046J / 18.410J Introduction to Algorithms (SMA 5503), Fall 2005
- Description:
- Lecture 14: Competitive Analysis: Self-organizing Lists
View the complete course at: http://ocw.mit.edu/6-046JF05
License: Creative Commons BY-NC-SA
More information at http://ocw.mit.edu/terms
More courses at http://ocw.mit.edu
- Video Language:
- English
- Duration:
- 01:14:28