An Abstract Memory Store
In my previous post I went over one possible scheme for an Object structure suitable for implementing dynamic typing in an interpreter. That structure can be used "as-is", letting your system handle all the storage and addressing, and depending on what kinds of data you have and what you want to do with them, that may be enough. It has been my experience, however, that it pays to have a centralized "memory store" where all objects are stored "at the same level", with variable names bound to an address for the object instead of to the object itself.
At first glance this might sound redundant - I mean, that IS pretty much how computer memory works anyway, so why duplicate it? Well, as it turns out there are a multitude of reasons for abstracting system memory, not the least of which is that object lifetimes become much easier to track. This in turn greatly simplifies the implementation of a garbage collector, if that is one of your goals. With that being said, this is a very lightweight abstraction without any real performance penalty for its application.
Cells And Addresses
To keep things simple, we'll be using the classic view of memory as rows of cells, each having a unique address. An array of the Object type described in my previous post on dynamic typing will serve as our memory cells. From here on I will refer to this data structure as the 'memstore'. Because we want to be able to re-use cells, lest we run out of memory, we will maintain two arrays: the previously mentioned array of Objects and an array of integers that serves as a free list. The free list stores the addresses of previously allocated cells as they get freed. The variable "nextFreeAddress" handles the initial allocation of cells, until that job gets taken over by the free list. The "freedCount" variable indexes into freedCells: once the "nextFreeAddress" value has been exhausted, we use this index to pull addresses off the free list.
class MemStore {
    private:
        Object* memstore[MAX_MEM_STORE];
        int nextFreeAddress;
        int freedCells[MAX_MEM_STORE];
        int freedCount;
    public:
        MemStore();
        MemStore(const MemStore& ms);
        ~MemStore();
        void free(int addr);
        void store(int addr, Object* obj);
        int storeAtNextFree(Object* obj);
        int allocate();
        Object*& get(int addr);
};
The public API for the MemStore is as follows:
- Object*& get(int address) retrieves the data stored at an address.
- void store(int address, Object* object) writes data to a specific address.
- int storeAtNextFree(Object* object) saves an object for the first time; it requests a memory address through allocate() and returns the address the object was stored at.
- int allocate() hands out the address of a single free cell.
- void free(int address) makes a previously allocated cell available again by placing its address on the free list.
I wasn't kidding when I said this was a lightweight abstraction. Because we are using arrays as the underlying data structure, all of the above operations can be performed in constant time, which is why there is virtually zero overhead to this abstraction. get() and store() are exactly what you would expect: simple getters/setters that perform an array access after validating the supplied address. The free() method is equally simple: it just adds the address being freed to the free list.
Object*& MemStore::get(int addr) {
    if (addr > 0 && addr < MAX_MEM_STORE)
        return memstore[addr];
    cout<<"Error: invalid memory address: 0x"<<addr<<endl;
    return memstore[0];
}

void MemStore::store(int addr, Object* obj) {
    if (addr < 1 || addr >= MAX_MEM_STORE) {
        cout<<"Error: Invalid memory address: 0x"<<addr<<endl;
        return;
    }
    memstore[addr] = obj;
}

void MemStore::free(int addr) {
    if (addr > 0 && addr < MAX_MEM_STORE)
        freedCells[freedCount++] = addr;
}
The allocate() method is what really drives the free list, handing out fresh cells from the memstore while they last and recycled addresses from the free list once they are exhausted. If the free list is empty and all addresses have been allocated, you can either allocate a larger memstore and copy everything over, or just spit out an "out of memory" error, as I do here for brevity.
int MemStore::allocate() {
    if (nextFreeAddress+1 == MAX_MEM_STORE) {
        if (freedCount == 0) {
            cout<<"Error: out of memory!"<<endl;
            return 0; // our nilObject
        } else {
            return freedCells[--freedCount];
        }
    }
    return ++nextFreeAddress;
}
int MemStore::storeAtNextFree(Object* obj) {
    int nextFree = allocate();
    store(nextFree, obj);
    return nextFree;
}
Using the memstore is just as straightforward:
#include <iostream>
#include <string>
#include "memstor.hpp"
using std::cout;
using std::endl;
using std::string;

int main() {
    MemStore ms;
    int saved[5];
    saved[0] = ms.storeAtNextFree(makeIntObject(42));
    saved[1] = ms.storeAtNextFree(makeStringObject(new string("the answer")));
    saved[2] = ms.storeAtNextFree(makeBoolObject(true));
    saved[3] = ms.storeAtNextFree(makeRealObject(420.0));
    for (int i = 0; i < 4; i++) {
        cout<<toString(ms.get(saved[i]))<<endl;
    }
}
And that's all there is to it. Obviously, this is meant as a starting point: if you're not implementing a garbage collector, you'll still want to clean up an Object's memory when its cell gets freed.