Up next: fractional numbers with variable precision that are inexact but quite fast.
Remember: integers are whole numbers, perfectly precise, very, very fast. Numeric supports fractions, is very, very precise, and very slow.
Floating point numbers are all the way on the right hand side: they support fractions, they're inexact, and they're very, very fast. Let me show you a few of the ways that you can declare them, then I'll try to prove to you that they're both fast and somewhat inaccurate.
The example that I'm working with here is some sort of IoT (Internet of Things) sensor reading table: you have a sensor name and then a reading. I feel like this is actually a pretty good example. When I say that floats are approximations, that doesn't mean they're bad, and it doesn't mean they're wrong. It just means each value is an approximation, which is oftentimes good enough, though not when dealing with financial data. When dealing with temperature or barometric pressure or whatever it is, a pretty good approximation is good enough. You have a few options here. You could say real, or you could say double, though for some reason you gotta type out double precision.
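A minimal sketch of that kind of table; the table and column names here are illustrative, not taken from the course:

```sql
-- Hypothetical IoT readings table: an approximation is fine for sensor data.
CREATE TABLE iot_readings (
    sensor_name text,
    reading double precision  -- or "real" if a 4-byte approximation is enough
);
```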
Let's talk about these. If you hop down here, a real and a double precision are both floating point approximations. A real is 4 bytes, and it goes from about 1E-37 (I think that's the correct notation) all the way to 1E+37, just a fundamentally huge, huge range, with at least 6 decimal digits of precision.
If we hop down here, you'll see that double precision is double that, which means it is 8 bytes. I'm gonna go ahead and add the word bytes up here so we remember it's 8 bytes down here. This is just a magnificent range, about 1E-307 all the way to 1E+308, with at least 15 decimal digits of precision. Those are the two types of approximations that you have.
There are a bunch of aliases that we'll look at, but you need to remember: a real is 4 bytes, and a double precision is 8 bytes. Depending on what range you need, or potentially what precision you need after the decimal, that will dictate which of these two types you should use. Just like with the integers, you have float4, which is legit. Then if you were to hop over here and run \d — what did we name that? — \d real_example, you would see that it was transformed into real. We can drop table real_example. Another option that you have is float(1) through float(24).
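As a sketch of what that alias behavior looks like, assuming a table named real_example as in the video:

```sql
CREATE TABLE real_example (
    reading float4  -- float4 is an alias for real
);

-- Running \d real_example in psql would show the column type as "real"

DROP TABLE real_example;
```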
In other databases that precision value is meaningful; in fact, I think in MySQL it's also ignored in much the same way it's handled here in Postgres. In Postgres, 1 through 24 is going to lead to a real, and 25 through 53 leads to a double. If we were to add reading_49 and make it a float(49), run that, and then hop back over here, you'll see it has transformed into a double precision. You also have float8, in the same way that you have int2, int4, and int8.
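A sketch of that float(p) rule; the table name here is hypothetical:

```sql
-- float(p): p = 1..24 maps to real, p = 25..53 maps to double precision
CREATE TABLE float_example (
    reading_24 float(24),  -- stored as real
    reading_49 float(49)   -- stored as double precision
);
```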
All right, enough with the aliases. I only tell you about those aliases so that when you see them in the wild, you have a place in your mind to slot that information. Typically you're gonna see real and double precision, but who knows, maybe you'll see float8 somewhere.
We're actually gonna look at a float8 example right here, because I want to try to show you that floats can be imprecise when you're doing mathematical operations on them, and that they're quite fast compared to numeric. I promised you that they are fast and that they can lose precision, so let's see if we can prove it. Over here we have 7.0 cast to a float8. You could cast this to a double precision; float8 is nice and compact. We're gonna cast it to a float8, then multiply it by 2.0 and divide by 10.0. If we run that, you'll see we get 1.4000000000000001. I mean, is it right? Not really. Is it close? Yeah, super close. If you were to compare this to 1.4, it's just gonna say no, they're not equal. When you are doing mathematical operations on or with floats, you do have to account for that imprecision.
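That experiment looks roughly like this; the exact digits displayed can vary slightly by platform and Postgres version:

```sql
SELECT 7.0::float8 * 2.0 / 10.0;
-- approximately 1.4000000000000001, not exactly 1.4

SELECT 7.0::float8 * 2.0 / 10.0 = 1.4;
-- false: strict equality fails because of the accumulated float error
```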
Instead of doing a strict equality, you might have to subtract. Is it equal to 1.4? No. Okay, is it less than, you know, 0.001 off? Ah, okay, we're gonna count that as equal. This is a function of floating point numbers. This is not a problem with Postgres; this is the way floating point numbers are designed to operate. We're trying to represent something that is unrepresentable in the underlying storage, so you end up with some weird stuff. Now, that result is kind of right, kind of wrong.
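A sketch of that tolerance-based comparison, using an arbitrary 0.001 epsilon:

```sql
-- Compare within a tolerance instead of exact equality
SELECT abs(7.0::float8 * 2.0 / 10.0 - 1.4) < 0.001;
-- true: "close enough" counts as equal
```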
If you were to cast it to numeric instead, super right. That's what I'm talking about when I'm talking about precision. Floats: pretty good. Numeric: definitely good. Now, what about speed? Don't freak out, there's some stuff here I'm gonna show you. If we look at this, you'll see we are using this very cool generate_series function that generates an entire table.
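The numeric version of the same math, as a sketch:

```sql
SELECT 7.0::numeric * 2.0 / 10.0;
-- exactly 1.4 (numeric may display trailing zeros, but the value is exact)
```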
Let me just show you what that is: select * from generate_series as num. Instead of whatever that is (what is that, 20 million?), let's just do 20. What this does is it generates a series, aptly enough, between 1 and 20. What we're doing here, without having to create a bunch of tables or anything like that, is casting this to a float, dividing it, and casting it to a float again. We're just doing a little bit of math, just for giggles. Then we're gonna cast it to a numeric, divide it, and cast it to a numeric. What we're trying to figure out, across those 20 million rows, is: is there a noticeable difference?
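A sketch of that benchmark shape; the exact expressions in the video may differ, and the timings come from psql's \timing:

```sql
-- See what generate_series produces on its own
SELECT * FROM generate_series(1, 20) AS num;

-- Same math over 20 million rows, float8 vs numeric (turn \timing on in psql)
SELECT sum((num::float8 / 7)::float8) FROM generate_series(1, 20000000) AS num;
SELECT sum((num::numeric / 7)::numeric) FROM generate_series(1, 20000000) AS num;
```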
Because I keep telling you that there is. We're gonna run this for float8, and we get 2.3 seconds. Not a very scientific benchmark, but hopefully it's big enough that we'll see a pretty big difference. Now the numeric version... hey, I feel good about that: 4.5 seconds. That is twice as slow, or twice as fast, depending on your frame of reference. If we run the float8 again, 2.3; run the numeric again, and we see that it's 4.6.
Again, numeric sure does look accurate to me, but you can see it's a lot slower. This almost rounds out our discussion about numbers; we're gonna do money in the next one. We now have this framework in our heads for integer, numeric, and floating point: perfectly accurate, perfectly accurate, approximation; really fast, kind of slow, really fast. Pick the one that works for you.
Hopefully we have proven that all of those underlying fundamental assumptions are accurate, and now you are well equipped to pick the one that matches your data and your use case.