(07-18-2023, 07:30 PM)SagaraS Wrote: I can spin this any way I want. With _Byte, INTEGER and LONG and others 8 bytes are reserved.
What you're seeing here is your OS's basic data ruleset at work. You'll see this type of thing all the time in TYPE usage and with DECLARE LIBRARY.
Let's take a step back and think about how a hard drive works for a moment. You have things like disk tracks, sectors, and clusters -- but what are they??
TRACKS: Each platter is broken into thousands of tightly packed concentric circles, known as tracks. These tracks resemble the annual rings of a tree. All the information stored on the hard disk is recorded in tracks. Starting from zero at the outer edge of the platter, the track numbers increase toward the inner side. Each track can hold a large amount of data, amounting to thousands of bytes.
SECTORS: Each track is further broken down into smaller units called sectors. A sector is the basic unit of data storage on a hard disk. A single track typically has thousands of sectors, and each sector typically holds 512 bytes of data; a few additional bytes are required for control structures and error detection and correction.
CLUSTERS: Sectors are often grouped together to form clusters.
So basically, your hard drive's SECTOR size is the smallest chunk of data that it can read or write at a time. If the sector size is 512 bytes, and you write a text file that is only "Hello World" and a total of 11 bytes, then that 11-byte file will still use a multiple of 512 bytes for storage. (In this case, it'd use 512 bytes. A 513-byte data file actually uses 1024 bytes of storage, as it *HAS* to align to the smallest sector size of 512 bytes.)
So why do we see such variance in disk sector sizes? Some drives use 128 bytes per sector. Some use 4096. Some use larger or smaller values. WHY?? What's the difference?
Smaller sector sizes pack data tighter together, while larger sector sizes read faster. <-- That's the overall truth of the matter.
If you have a drive with a 64-byte sector size, you can write 10 "Hello World" files and use 640 bytes of drive space -- each file takes the minimum of one sector. Now compare that to a drive with a 4096-byte sector size: those exact same 10 "Hello World" files will use 40,960 bytes of drive space. Small sector sizes pack data much more efficiently!!
On the other hand, say you have a file which is 640 bytes in size. With the 64-byte sector size, that drive has to read 64-byte sectors 10 different times and move all that data into memory; whereas the drive with the 4096-byte sector size makes 1 simple pass and is done with it. The drive with the larger sectors is much faster!!
Now, with that basic concept in mind with hard drives, the way your OS handles memory isn't much different.
Generally speaking, data for your programs is going to be aligned to the size of your OS's registers. For 32-bit apps, that's usually on a 4-byte boundary. For 64-bit apps, that's usually on an 8-byte boundary. It's why, when you write a DECLARE LIBRARY routine for a Microsoft library, you have to add padding for the OS.
TYPE foo
    a AS INTEGER
    p AS STRING * 6 'padding for 64-bit systems. For a 32-bit system, this would only need 2 bytes.
    b AS _INTEGER64
END TYPE
Your OS wants to implement data structures so that it reads and works with a register of data in a single pass. It's faster and more efficient, rather than having to read a portion of a register, work with it, then write a portion back.
It's all about what's most efficient, in general cases, for the OS to read/write data. (Note that things like #pragma pack alter this behavior.) Generally speaking, the OS is going to write data in 4-byte chunks (sectors, if you want) for a 32-bit app and in 8-byte chunks for a 64-bit app.
>> NOTICE I SAID APP AND NOT OS!! 64-bit OSes will still pack 32-bit programs into 4-byte sectors, rather than 8-byte sectors, so everything defaults for compatibility reasons. <<
So what you're seeing is the clean positioning of data along the 8-byte boundary of your 64-bit apps -- and that's what you'll generally find on a fresh start of a new program. As I mentioned, however, the data isn't guaranteed to always remain in such a neatly organized order for you. Let's think about the following program flow for a moment:
We start a program with 10 integers in use. Those use the first 80 bytes of memory.
11111111222222223333333344444444555555556666666677777777888888889999999900000000 <-- this is basically the memory map of those 10 integers each using 8 bytes of memory. 1 is the first integer. 2 is the second integer. 3 is the third integer, and so on...
We then start a routine which resizes that second integer to become a 2-element array. It can't stay in the original 8 bytes of memory that it used -- it needs 16 bytes to hold that array now.
11111111XXXXXXXX33333333444444445555555566666666777777778888888899999999000000002222222222222222 <-- We now have a gap of 8 bytes of freed memory which isn't in use any longer. variable 1 comes first, then there's a gap, then there's variable 3, and variable 2 is now in memory after variable 10 (represented by 0).
And now we add a new variable, which is variable #11 into the program.
11111111AAAAAAAA33333333444444445555555566666666777777778888888899999999000000002222222222222222 <-- The As are representative of the 11th variable. In this case, the first variable comes first in memory, followed by the 11th variable, with the 3rd through 10th next, and the 2nd variable using up the last bytes in memory.
The more a program runs, the more its memory tends to get shuffled over time. Don't assume that it's always going to be one contiguous block of memory from variable 1 to variable N. If you do, you'll eventually end up in a world of problems as you corrupt values you never intended to touch.