[MUSIC].
So far in this section, we've been
focusing on integer representations, and
we've looked at both unsigned and signed
integers.
Now, let's turn our attention to
fractional binary numbers, on our way to
talking about floating point values.
So, let's start with a typical fractional
binary number.
Here, we have one. You'll notice that
it's a little different than the numbers
we've dealt with before.
It has a binary point.
Very analogous to a decimal point.
In this case, just like with decimal
numbers, remember we had the 1s column
and the 2s column and the 4s column and
so on?
Now, we have the halves column, the
quarters column, and the eighths column,
just as in a decimal number we would
have tenths and hundredths and
thousandths.
So, this number is an 8 plus a 2 plus a
1.
That's the integer side, the left side
of the binary point.
So that's the number 11.
And then, we have a half and an eighth,
but not a fourth, because that bit is a 0.
So in this case, this number comes out
to be 11.625 in decimal, okay?
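The column-by-column arithmetic above can be sketched in a few lines of Python (the helper name is ours, not from the lecture):

```python
# Hypothetical helper: evaluate a fractional binary string like "1011.101".
# Each digit left of the point contributes bit * 2**k (k = 0, 1, 2, ...),
# and each digit right of the point contributes bit * 2**-k (halves, quarters, ...).
def frac_bin_to_decimal(s):
    int_part, _, frac_part = s.partition(".")
    value = 0.0
    for k, bit in enumerate(reversed(int_part)):
        value += int(bit) * 2**k          # 1s, 2s, 4s, 8s columns
    for k, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2**-k         # halves, quarters, eighths columns
    return value

print(frac_bin_to_decimal("1011.101"))    # 8 + 2 + 1 + 1/2 + 1/8 = 11.625
```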
And we're, in fact, going to interpret
fractional binary numbers the same way
we do fractional decimal numbers.
Okay?
We're going to do exactly the same
things.
So here we see an extended version of
the calculation that we just did:
again, the integer values and the
fractional values, for the bits both to
the left and to the right of the binary
point.
Okay?
And we can have a summation expression
that adds all that up.
For k from minus j up to i, we can add
all those values, as we just did in our
little example.
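That summation, the sum of b_k times 2 to the k for k running from minus j up to i, can be checked directly. Here's a minimal sketch, with the bits of 1011.101 stored by position:

```python
# Bits of 1011.101 keyed by their position k relative to the binary point:
# positive k is left of the point, negative k is right of it.
bits = {3: 1, 2: 0, 1: 1, 0: 1, -1: 1, -2: 0, -3: 1}

# The summation from the slide: value = sum over k of b_k * 2**k.
value = sum(b * 2**k for k, b in bits.items())
print(value)  # 8 + 2 + 1 + 1/2 + 1/8 = 11.625
```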
Let's take a look at a few more
examples of these fractional binary
numbers.
Let's start with 5 and 3/4ths.
How would we represent that?
Well, the 5 that's going to be to the
left of the binary point is going to be
pretty easy.
That's just 1 0 1.
Then the 3/4ths on the right-hand side
of the binary point is equivalent to a
half plus a fourth.
So those are going to be represented as
a 1 in the halves column and a 1 in the
quarters column.
So we'd expect that representation to
look like this: a 1 0 1 for the 5, a
binary point, then 1 1 for the half
plus a fourth that make up the 3/4ths.
Okay, for 2 and 7/8ths, it's a very
similar thing.
The 2 would be 1 0 on the left, and the
7/8ths would now be a half plus a fourth
plus an eighth, or .111, and that is
the representation of 2 and 7/8ths.
For 63/64ths, there is no integer part
to the number, so we have a 0 to the
left of the binary point.
And then a half plus a fourth plus an
eighth plus a sixteenth, all the way
down to a sixty-fourth, gets us to
63/64ths, and that's going to be the
equivalent of six 1s, so 0.111111.
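All three examples can be reproduced with a small conversion helper (a sketch of ours, assuming the value has a power-of-two denominator):

```python
from fractions import Fraction

# Sketch: render a non-negative value whose denominator is a power of two
# as a fractional binary string (the helper name is ours, not the lecture's).
def to_frac_bin(value, frac_bits):
    scaled = Fraction(value) * 2**frac_bits   # shift the binary point right
    assert scaled.denominator == 1, "not exactly representable in frac_bits bits"
    s = format(scaled.numerator, "b").zfill(frac_bits + 1)
    return s[:-frac_bits] + "." + s[-frac_bits:]

print(to_frac_bin(Fraction(23, 4), 2))   # 5 3/4  -> 101.11
print(to_frac_bin(Fraction(23, 8), 3))   # 2 7/8  -> 10.111
print(to_frac_bin(Fraction(63, 64), 6))  # 63/64  -> 0.111111
```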
Now some observations.
Remember the shifting that we were
doing with binary numbers?
Well, we can do it with fixed point
representations of these fractional
binary numbers as well.
If we divide by 2, it's like moving the
binary point one position to the left.
And multiplying by 2 is moving the
binary point one position to the right.
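A quick sanity check of that observation, using the 11.625 value from earlier:

```python
# Dividing by 2 moves the binary point one position left; multiplying by 2
# moves it one position right.
x = 11.625            # 1011.101 in binary
print(x / 2)          # 101.1101 in binary -> 5.8125
print(x * 2)          # 10111.01 in binary -> 23.25
```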
Okay?
One other observation to note is that
numbers of this form, with a trailing
run of 1s after the binary point, are
just below the value 1.0, because if we
had a little bit more, they would be
exactly equal to 1.
So, if we added one half plus a fourth,
plus an eighth, plus a sixteenth, and so
on infinitely far down we would approach
1, but never quite get there.
So often, we'll see a notation like this
to avoid writing the long, long string of
1s.
We just say 1 minus epsilon, 1 minus a
smidgen, to indicate that.
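You can see the pattern numerically: with k ones after the binary point, the value 0.111...1 is exactly 1 minus 2 to the minus k, so each extra 1 bit halves the remaining gap to 1.0.

```python
# 0.111...1 with k fraction bits equals 1 - 2**-k: the gap to 1.0 halves
# with every additional 1 bit, approaching but never reaching 1.
for k in (1, 2, 4, 8):
    value = sum(2**-i for i in range(1, k + 1))
    print(k, value, 1 - value)   # the gap is exactly 2**-k
```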
Okay.
So, what are some of the limitations of
fractional binary numbers and the
values they can represent?
Remember, we had limitations on
integers: they could only get so large
and so negative before we ran out of
room in our bit representations.
Okay, so first of all, we can only
represent numbers exactly if they can
be written in the form x times 2 to the
y, for integers x and y.
Other rational numbers are going to
have repeating bit representations.
So, for example, remember that 1/3 in
decimal is 0.33333..., but it never
stops; it goes on forever.
Well, we have an equivalent situation
with binary numbers.
So take a look at the representation
for 1/3. In fact, 1/3 is actually going
to be the bit pattern 0.010101...,
and you'll notice that 01 is a
repeating pattern; we'll use square
brackets to mark the part that
repeats.
There are other values like that in
binary: 1/5 also has a repeating bit
pattern, repeating 0011 forever and
ever.
And 1/10, you'll notice, has that same
repeating pattern as 1/5 did, because
of course 1/10 is just half of 1/5.
It's just that same bit pattern shifted
over by 1, so it's going to repeat the
same way.
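These repeating patterns are easy to generate by repeated doubling, the binary analogue of decimal long division (a sketch; the function name is ours):

```python
from fractions import Fraction

# Sketch: produce the first n fraction bits of a value in [0, 1) by
# repeatedly doubling and peeling off the bit that lands left of the point.
def fraction_bits(f, n):
    bits = []
    for _ in range(n):
        f *= 2
        bits.append(f.numerator // f.denominator)  # 0 or 1
        f -= bits[-1]
    return "".join(map(str, bits))

print(fraction_bits(Fraction(1, 3), 8))   # 01010101 -> repeating [01]
print(fraction_bits(Fraction(1, 5), 8))   # 00110011 -> repeating [0011]
print(fraction_bits(Fraction(1, 10), 8))  # 00011001 -> same pattern, shifted by 1
```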
Alright, so fixed point representation
is when we decide to represent
fractional binary numbers by picking a
place for the binary point and fixing
it there, always putting it in that
location.
So, we have to decide where to put it.
Well, say we're looking at 8-bit fixed
point numbers, again just to keep the
example small.
We can decide to put the binary point,
for example, so that it has three bits
to the right and five bits to the left.
Well, what does that imply?
That implies that we can represent up
to the number 31 on the left, so we can
get as high as 11111.111 in binary.
That is, 31 and 7/8ths would be the
largest number we could represent.
Of course, we could have chosen to put
the binary point elsewhere, fixed it at
a different place with only three bits
on the left and five bits to the right.
Well, now this only lets us go up to
seven point something.
Seven and what? Well, there are five
bits here, so we can represent up to
31/32nds.
And so the largest number we can have
here is 7 and 31/32nds, while before,
we could have numbers up to 31 and
7/8ths.
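A quick sketch of the two choices (the helper name is ours): an 8-bit unsigned fixed-point number with f fraction bits maxes out at all 1s, which is (2^8 - 1) divided by 2^f.

```python
# Largest value of an 8-bit unsigned fixed-point number with f fraction bits:
# all eight bits set, i.e. (2**8 - 1) / 2**f.
def max_fixed(total_bits, frac_bits):
    return (2**total_bits - 1) / 2**frac_bits

print(max_fixed(8, 3))   # 5 integer bits, 3 fraction bits: 31.875 = 31 7/8
print(max_fixed(8, 5))   # 3 integer bits, 5 fraction bits: 7.96875 = 7 31/32
```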
Okay?
So, what is the difference between these?
How would we choose what to use?
Well, one question we have to ask
ourselves is how large a number we need
to be able to represent, and the other
is how much precision we want the
numbers to have.
In other words, what is the smallest
fractional difference that we can
represent?
Okay.
So, the range is that span of
representable numbers, and the
precision is how small a fraction we
can distinguish, and with fixed point
representations we have this trade-off.
The more range we have, the less
precision we have, because the fewer
bits we have on the other side of the
binary point; and vice versa, if we
have less range, then we get more
precision.
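The precision side of the trade-off is just as concrete: with f fraction bits, the smallest nonzero value, and the smallest step between adjacent values, is 2 to the minus f.

```python
# Precision of a fixed-point format with f fraction bits: the smallest
# step between representable values is 2**-f.
for f in (3, 5):
    print(f, 2**-f)   # 3 fraction bits -> 0.125, 5 fraction bits -> 0.03125
```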
So, that's the reason we don't end up
using fixed point representations:
because of this very strong drawback,
it's really hard to pick a good place
for the binary point to be.
All right.
Sometimes you end up wanting range.
Sometimes you end up needing more
precision.
And the more you have of one, the less
you have of the other.
You can't get the best of both.
So that's why we're going to turn our
attention to what are called floating
point representations, where we don't
fix the binary point, but allow it to
float as needed.