Construct a frontend to translate between Java objects and relational database records
One of the great strengths of Java is its ability to automate tedious programming tasks. For example, in the realm of I/O, object serialization automates the encoding of an arbitrary object as a stream of bytes. It could be done manually, but serialization makes it much more accessible and effective. Similarly, in the realm of networking, RMI (Remote Method Invocation) automates network requests between objects. That, again, could be done manually, but RMI makes it more accessible and effective. Finally, in the world of XML, work is under way to automate the generation of Java maps of XML document types, which will provide automated facilities for encoding and decoding XML documents. Again, much better than doing it manually.
TEXTBOX:
TEXTBOX_HEAD: Build an object database: Read the whole series!
- Part 1: Construct a frontend to translate between Java objects and relational database records
- Part 2: Implement relational database storage for Java objects
:END_TEXTBOX
One notable gap in this capability is the realm of databases. In this article, we will present our first go at overcoming this limitation by providing automated transformations between Java objects and records in a database.
(Some credit for this article must be extended to JavaWorld reader Alasdair Gilmour, who graciously pointed us in the direction of accessing private object fields.)
Note: The article’s full source code can be downloaded from Resources.
Terminology
Let’s first get some terminology down. You see, I’m a firm believer in flat files, binary editors, and linear searches. There’s really nothing you can’t do with them. Since I’m now venturing into the scary world of databases, I want to make sure you know what I mean by ftang when I say ftang.
A database is a collection of tables. A table is a collection of records. A record is a collection of fields. Fields are the basic units of information in a database.
So, a database is a big bad backend system containing all of your information. Remember, information is power. A table is a specific subset of information of a particular type: for example, a table of all the employees in your company. A record is a single row within that table: for example, all information about a particular employee. Finally, a field is a particular datum within that record: for example, an employee’s expendability rating.
In this article, we want to map between database records and Java objects. That means we want to be able to automatically translate between a Java User object and a particular record within, for example, an employees table in a database. Consequently, the name variable of the Java object will be mapped to the name field of the database record, and vice versa.
To accomplish this, we could simply serialize the Java object and throw the resulting binary blob into the database. But that’s no fun. It does not lend itself to convenient access from anything but a Java program. In particular, we lose interoperability and we lose human accessibility. So we’re not going there.
Architecture
Let’s look at the 33,000-foot view of storing a Java object in a database:
- Vivisect a Java object to determine all its fields and their values
- Store those fields and values in the backend database
Dropping rapidly to sea level, we have a couple of options for vivisecting the object: reflection allows us to programmatically query the object’s public fields, and serialization allows us to automatically encode all the fields of the object. Alternatively, a public interface would allow the object to decide on its own encoding. Similarly, we have a variety of options for storing the fields in the backend database: JDBC (Java Database Connectivity), flat files, maps, and so on.
Conveniently, the logical separation between vivisection and cold storage provides a convenient break between this article (mine) and the next (Michael Shoffner’s). I’ll be your vivisectionist on this little tour; Mike, your mortician. I mean, database consultant. The separation also provides a convenient variety of runtime configuration options.
Having briefly outlined the broad division of labor, I’m now going to nail down two interfaces that describe the processes:
- ObjectStorer. This interface describes the frontend object database process: scattering a Java object into fields within backend storage, and gathering those fields back into a Java object.
- ObjectStorage. This interface describes the backend object database process: storing object fields in a database and retrieving them from the database.
To actually make use of the system, you must connect an ObjectStorer with an ObjectStorage. So, we next need to nail down the data structures that will be communicated between these interfaces:
- StorageFields. This class encapsulates information about the object being stored, including the names, types, and values of all its fields. An ObjectStorer will hand the information to an ObjectStorage for placement in the database.
- RetrievalFields. This class encapsulates information about the record that was retrieved from the database, including the names and values of all fields. The difference from StorageFields is that the backend need not maintain type information about the fields; that is automatically determined during the retrieval process.
This is about as far as I can go without getting my feet wet, so I guess it’s about time to code …
Implementation
I start with the code for the architectural classes. After this, I’ll go through a reflection-based object storer, a serialization-based object storer, and finally a map-based object storage so that we can check that things are working out.
Interface ObjectStorer
The ObjectStorer interface leads to the frontend of our object storage system:
import java.io.*;
public interface ObjectStorer {
public void put (Object key, Object object) throws IOException;
public Object get (Object key) throws IOException,
ClassNotFoundException, IllegalAccessException, InstantiationException;
}
The API is quite simple: You insert an object into storage with the put() method. Along with an object, you specify a key with which it will be stored in the backend. The meaning of this key is backend specific. For a database, it might identify a particular record within a table. For a flat file, it might identify the index number. Any existing object under that key is replaced. If you put null into storage, then the record is emptied.
Similarly, you retrieve an object from storage with the get() method, specifying the key under which the object was stored. Again, the key is backend specific. The result has type Object; you must cast it to the particular type that you expect. The type returned depends on the backend and what was originally stored there. If no record is found, then null is returned.
Various exceptions may be raised, depending on whether there is an I/O error communicating with storage, or if there is a problem constructing a retrieved type.
Interface ObjectStorage
The ObjectStorage interface leads to the backend of our object storage system:
import java.io.*;
public interface ObjectStorage {
public void put (Object key, StorageFields object) throws IOException;
public RetrievalFields get (Object key) throws IOException;
}
The API is again quite simple: An object, represented as a StorageFields data structure containing all the object’s fields, is placed into backend storage under the user-specified key. If object is null, then the key should be removed from the database.
Similarly, retrieval from the database involves extracting an object of type RetrievalFields corresponding to the user-specified key.
Class StorageFields
The StorageFields class represents an object’s fields that are to be placed in backend storage. Rather than bore you with details, I’ll just provide a skeleton of the API to this class. Internally, it’s a few maps and lists. (See Resources for the complete source code.)
import java.util.*;
public class StorageFields {
public StorageFields (String className) ...
public void addField (String field, Class type, Object value) ...
public String getClassName () ...
public Iterator getFieldNames () ...
public Class getType (Object field) ...
public Object getValue (Object field) ...
}
The constructor for this class accepts the class name of the object it is representing (for example, org.merlin.Employee). The addField() method then allows object fields to be added. Each field is fully specified as a name (“value,” for example), type (represented by the appropriate Class object), and value. Primitive values are wrapped in the appropriate holder (for example, java.lang.Integer).
To query this class, getClassName() returns the name of the represented class and getFieldNames() returns an iteration of the field names. For each field name, getType() returns the corresponding type, and getValue() the corresponding value.
Now, there are a few caveats involved in storing an object from this representation. First is the issue of the class name. To recreate an object from its fields, we must (at the very least — I’ll discuss this more along with serialization) know its class name. This information should, therefore, typically be stored along with the fields. However, it may be that this information is implicitly obvious. For example, a particular table may only ever hold org.merlin.Employee objects in bondage, in which case the information need not be stored along with each record; it can instead be retrieved based on the implicit context.
Next we have the issue of duplicate field names. If a superclass and subclass happen to declare the same particular field, then we would end up with a clash of names in the fields to be stored. To overcome this, the storer must explicitly rename duplicate field names. For example, the subclass field might be named value and the superclass field renamed value'. The storer can then reverse the process during retrieval.
Class RetrievalFields
The RetrievalFields class represents an object’s fields that have been retrieved from backend storage. Again, rather than bore you with details, I’ll simply provide a skeleton of the API to this class. Internally, it’s just a few maps, lists, and a bit of reflection for good measure. The full code is supplied in the Resources section at the end of this article.
import java.util.*;
import java.lang.reflect.*;
public class RetrievalFields {
public RetrievalFields (String className) ...
public void addField (String field, Object value) ...
public String getClassName () ...
public Iterator getFieldNames () ...
public Object getValue (Object field, Class type) ...
}
The constructor for the class accepts the class name of the object it is representing (for example, org.merlin.Employee). It should be retrieved either from backend storage or else from context (for example, which table was accessed). The addField() method then allows retrieved object fields to be added. Each field is specified as a name (“value,” for example) and value. Ordinarily, the value is encoded as expected (using the appropriate type holder for primitive types). However, to support storage systems that do not maintain type information, the value can also be expressed as a string, which will be decoded as appropriate during retrieval. Other value-encoding mechanisms can be supported by a storage-specific subclass.
To query this class, the getClassName() method returns the name of the represented class and the getFieldNames() method returns an iteration of the field names. For each field name, the getValue() method returns the corresponding value as an instance of the specified class (or, for primitive classes, of the appropriate holder).
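To make the string-decoding behavior concrete, here is a hedged sketch of how such a getValue() decoder might treat a few representative types. The real RetrievalFields handles this internally; the helper class below is purely illustrative:

```java
public class Decode {
    // Decode a string-encoded field value into an instance of the
    // expected type (primitives come back in their wrapper classes).
    // Only a few representative types are handled in this sketch.
    public static Object decode (String value, Class type) {
        if (type == String.class)
            return value;
        if (type == int.class || type == Integer.class)
            return Integer.valueOf (value);
        if (type == long.class || type == Long.class)
            return Long.valueOf (value);
        if (type == boolean.class || type == Boolean.class)
            return Boolean.valueOf (value);
        throw new IllegalArgumentException ("unsupported type: " + type);
    }

    public static void main (String[] args) {
        System.out.println (decode ("52", int.class));       // prints 52
        System.out.println (decode ("true", boolean.class)); // prints true
    }
}
```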
Class GeneralStorer
The GeneralStorer class, a convenience implementation of ObjectStorer, handles some basic services on behalf of a full implementation:
import java.io.*;
import java.util.*;
public abstract class GeneralStorer implements ObjectStorer {
protected GeneralStorer (ObjectStorage storage) ...
public void put (Object key, Object object) throws IOException ...
protected abstract StorageFields getFields (Object object) throws IOException;
public Object get (Object key) throws IOException,
ClassNotFoundException, IllegalAccessException, InstantiationException ...
protected abstract Object setFields (RetrievalFields object) throws IOException,
ClassNotFoundException, IllegalAccessException, InstantiationException;
}
A subclass should pass the appropriate ObjectStorage object into the constructor of this class. The class then implements the put() method to call on the subclass getFields() method and then to store the result in backend storage. Similarly, it implements the get() method to retrieve fields from backend storage; these fields are then passed to the subclass setFields() method for object reconstruction.
Reflection-based object storage
We’ll now look at perhaps the most obvious solution to the object-storage problem. We will use the Java reflection API to introspect the fields of an object to be stored in a database and then restore those fields.
Class ReflectionStorer
The ReflectionStorer class, an ObjectStorer, uses reflection to divine the public fields of an object for storage and to restore them after retrieval:
import java.io.*;
import java.lang.reflect.*;
import java.util.*;
public class ReflectionStorer extends GeneralStorer {
We extend GeneralStorer to avail ourselves of the general support provided by that class.
In the constructor, we accept an ObjectStorage and simply pass it on to the superclass:
public ReflectionStorer (ObjectStorage storage) {
super (storage);
}
Next, the getFields() method forms a StorageFields representation of the object to be stored. We extract the class name from the object and then iterate through the classes and superclasses of the object, identifying all the fields to be stored. We use the suffixes HashMap to correctly handle duplicate field names:
protected StorageFields getFields (Object object) {
Class clazz = object.getClass ();
String className = clazz.getName ();
StorageFields fields = new StorageFields (className);
try {
Map suffixes = new HashMap ();
do {
getFields (fields, object, clazz, suffixes);
clazz = clazz.getSuperclass ();
} while (clazz != null);
} catch (IllegalAccessException ignored) {
}
return fields;
}
Below, the getFields() method gets all the fields defined in a particular class along the implementation hierarchy of the object to be stored. We use reflection to introspect all the fields declared by the class. Then we iterate through these fields, identifying all those that are valid (nonstatic, nontransient, and nonfinal). We append each field, its type, and its value to the StorageFields we are creating and then update the suffix map:
private void getFields
(StorageFields fields, Object object, Class clazz, Map suffixes)
throws IllegalAccessException {
Field[] classFields = clazz.getDeclaredFields ();
AccessibleObject.setAccessible (classFields, true);
int n = classFields.length;
for (int i = 0; i < n; ++ i) {
Field field = classFields[i];
if (isValid (field)) {
String name = field.getName ();
Class type = field.getType ();
Object value = field.get (object);
StringBuffer suffix = (StringBuffer) suffixes.get (name);
if (suffix == null)
suffixes.put (name, suffix = new StringBuffer ());
fields.addField (name + suffix, type, value);
suffix.append ('\'');
}
}
}
Note that we use the AccessibleObject class method setAccessible() to enable access to all the declared fields that we have retrieved, including those that are private, protected, and so on. Ordinarily, Java’s rules of access control would prevent us from accessing nonpublic fields. However, the AccessibleObject class (the superclass of Field, Method, and Constructor) provides setAccessible() methods that enable trusted code (or semitrusted code with the ReflectionPermission "suppressAccessChecks") to access nonpublic fields and methods.
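The effect of setAccessible() is easy to demonstrate in isolation. This minimal sketch reads a private field reflectively; the setAccessible() call is what permits trusted code in another class to do the same:

```java
import java.lang.reflect.*;

public class AccessDemo {
    private String secret = "hidden";

    public static void main (String[] args) throws Exception {
        AccessDemo target = new AccessDemo ();
        Field field = AccessDemo.class.getDeclaredField ("secret");
        // Without this call, field.get() invoked from another class would
        // throw IllegalAccessException; with it, trusted code may read the
        // private field directly.
        field.setAccessible (true);
        System.out.println (field.get (target)); // prints hidden
    }
}
```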
Next, the setFields() method forms a new object from its RetrievalFields representation. We extract the class name, use the Class.forName() method to load the class, and create a new instance using its public no-argument constructor. Next, we iterate through the classes and superclasses of the object, restoring all stored fields. Again, we use the suffixes HashMap to correctly handle duplicate field names:
protected Object setFields (RetrievalFields fields)
throws ClassNotFoundException, IllegalAccessException, InstantiationException {
String className = fields.getClassName ();
Class clazz = Class.forName (className);
Object object = clazz.newInstance ();
Map suffixes = new HashMap ();
do {
setFields (object, fields, clazz, suffixes);
clazz = clazz.getSuperclass ();
} while (clazz != null);
return object;
}
The next method, setFields(), restores all the fields defined in a particular class along the implementation hierarchy of the object being restored. We use reflection to introspect all the fields declared by the class. Then we iterate through these fields, identifying all those that are valid. For each such field, we retrieve the stored value from the RetrievalFields object (specifying the expected type) and insert the resulting value into the restored object:
private void setFields
(Object object, RetrievalFields fields, Class clazz, Map suffixes)
throws IllegalAccessException {
Field[] classFields = clazz.getDeclaredFields ();
AccessibleObject.setAccessible (classFields, true);
int n = classFields.length;
for (int i = 0; i < n; ++ i) {
Field field = classFields[i];
if (isValid (field)) {
String name = field.getName ();
Class type = field.getType ();
StringBuffer suffix = (StringBuffer) suffixes.get (name);
if (suffix == null)
suffixes.put (name, suffix = new StringBuffer ());
Object value = fields.getValue (name + suffix, type);
field.set (object, value);
suffix.append ('\'');
}
}
}
The convenience method isValid() lets us know whether a particular field is valid (nonstatic, nontransient, and nonfinal):
private boolean isValid (Field field) {
int modifiers = field.getModifiers ();
return (!Modifier.isTransient (modifiers) &&
!Modifier.isStatic (modifiers) &&
!Modifier.isFinal (modifiers));
}
}
Had we not used AccessibleObject to enable access to nonpublic fields, we would also need to check whether the field was public.
Discussion
The ReflectionStorer object storer appears to be relatively capable; however, it has a few limitations. First, nonpublic fields are accessible only to trusted applications or signed applets; the code cannot be used from completely untrusted code. Second, all stored classes must provide a public no-argument constructor. Third, after restoring fields to an object, we do not inform it that it has been reconstructed, so it cannot perform any post-reconstruction maintenance.
An example of a suitable class for this object storer is the following:
public class Employee {
private String name;
private int age;
private boolean expendable;
public Employee (String name, int age, boolean expendable) {
...
}
public Employee () {
}
}
In fact, most of the reflection problems that I just mentioned are easily overcome: we can simply use AccessibleObject to enable access to a private no-argument constructor and to private post-reconstruction maintenance methods.
Overall, reflection solves our problem in a reasonably pleasant and efficient manner, subject to the limitation that code must be trusted.
Serialization-based object storage
The problem we wish to overcome is, in fact, remarkably similar to the problem overcome by the object streams. They can introspect the public and private fields of any object that declares permission and can reconstruct objects, with or without no-argument constructors. Now, there are caveats and wherewithals associated with the use of the object streams, but I’ll not go into those here; I will simply state the limitations of my solution at the end of this section.
Given the similarity in goals of our two problems, and the successful implementation of the object streams, it seems to make sense to leverage the object serialization framework. With the advent of the Java 2 Platform came a tantalizing new feature in the object streams: the ability to override the standard serialization implementation. Perhaps, by overriding the default implementation, one can scatter the fields of an object into a database.
Sadly, in practice it appears (and I am open to correction) that the object streams’ overriding capability is of no use. No facility is provided to gain access to the nonpublic fields of an object being serialized, and no facility is provided to access the private methods and fields associated with custom serialization. As a result, one cannot, it appears, directly and easily solve our problem with the object streams.
Of course, there are usually ugly solutions to messy computing problems. This problem is no exception. Barring the elegant solution of overriding the object streams, we must use the ugly, awkward, and inefficient — but completely functional — solution that I will now describe:
To extract the fields of an object, construct an ObjectOutputStream that writes into a memory buffer. Serialize the object of interest. Then, dissect the memory buffer to determine the fields and data of the object. The format of the serialization protocol is publicly available in the Java documentation (and, more important, in the source), so this is relatively straightforward. Painful but relatively straightforward. To reconstruct an object, take the restored fields and reconstruct a serialized byte stream in a memory buffer. Attach an ObjectInputStream to this and, hey, presto!
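The first half of that dance, serializing into a memory buffer and inspecting the raw bytes, can be sketched as follows. The stream constants come from java.io.ObjectStreamConstants; the full field-level dissection that follows the stream header is omitted here:

```java
import java.io.*;

public class DissectDemo implements ObjectStreamConstants {
    // Serialize an object into a memory buffer and hand back the raw bytes.
    public static byte[] serialize (Object object) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream ();
        ObjectOutputStream out = new ObjectOutputStream (buffer);
        out.writeObject (object);
        out.close ();
        return buffer.toByteArray ();
    }

    public static void main (String[] args) throws IOException {
        byte[] bytes = serialize ("hello");
        // Every serialized stream opens with a two-byte magic number
        // (0xACED) and a two-byte protocol version; the object data that
        // follows is what a storer must dissect field by field.
        short magic = (short) (((bytes[0] & 0xff) << 8) | (bytes[1] & 0xff));
        short version = (short) (((bytes[2] & 0xff) << 8) | (bytes[3] & 0xff));
        System.out.println (magic == STREAM_MAGIC);     // prints true
        System.out.println (version == STREAM_VERSION); // prints true
    }
}
```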
Aside: The fact that we can use the object streams to discern the private internals of any object — awkwardly or not — proves that adding simple support for this process to the Java API would not be a security flaw. Maybe I’m missing something, but it seems to me that a few small additions to the overriding API would make life much easier. In fact, the kind folks at JavaSoft inform me that a future release may indeed have them.
Anyway, on with the code …
Class SerializationStorer
This class, SerializationStorer, is an ObjectStorer that uses the object streams, and a dissection of the serialization protocol, to divine all of the public and private fields of an object for storage and to restore them after retrieval:
import java.io.*;
import java.util.*;
public class SerializationStorer extends GeneralStorer
implements ObjectStreamConstants {
...
}
I’ll save you from the boring and tedious code. Suffice it to say that I follow the directions above, serializing and dissecting to store objects, and reconstructing and deserializing to restore objects. For clarity, I separate those two processes into inner classes Storer and Retriever within the implementation class. The big-picture mechanisms are the same as the more transparent reflection storer, which I’ve already detailed.
Much more interesting is the later discussion on the limitations of this class.
Discussion
SerializationStorer is a fully capable object storage class. It can divine the private and public fields of serialized objects, and it can safely reconstruct objects even if they don’t provide no-argument constructors. Also, because it avails itself of the serialization process, objects can implement readObject() methods to perform post-deserialization cleanup and can register validators.
Let’s look at an example of a suitable class for this object storer:
public class Employee implements Serializable {
private String name;
private int age;
private boolean expendable;
public Employee (String name, int age, boolean expendable) {
...
}
...
}
Unlike the previous example, we don’t need to provide an empty constructor and our code does not need to be trusted. All we need to do is implement Serializable to indicate that we may be serialized and follow the basic rules of serializability. Overall, this allows us to employ better and more proper class definitions within our object system in a wider range of applications.
Limitations
OK, so it’s all singing and all dancing. Or is it? Does it just scream and shake? Well, it’s in between. I’ve imposed the following limitations on classes that can be stored:
- Externalization. Classes that implement Externalizable are not supported. The externalization protocol is a class-specific binary encoding option that is anathema to meaningful dissection.
- Custom serialization. Classes that implement the writeObject() method are not supported. Again, this method allows a class to add proprietary binary-encoded data to a serialized stream. Now, actually, with a little bit of work we could support it. If a class were willing to provide name and type information with the data that it wrote, then we could usefully store that information in a database. We could even store the binary data directly in the database without any name or type information. But that’s another episode.
- Reference fields. Serialized fields may only be of primitive or string types. No arrays or other reference types are supported. This is the most interesting limitation. Arrays of primitives could be supported with relative ease; I just chose not to go to that effort. However, other reference types pose an intriguing problem.
Consider adding a boss field of type Employee to the Employee class. When I go to store an Employee, what exactly do I store for the boss? One option would be to store the fields of this referenced object within the Employee’s record. So we would wind up with fields such as boss.age. This, however, is evil. It violates the requirement of our database that data not be duplicated (the boss will also have an entry). Also, what if an employee is his or her own boss?
The obvious answer is to store a reference to the boss record in the database. But what is a reference in a database? Well, one requirement of database design is that every table have a primary key. This is a field (or a group of fields, in the case of a composite key) that uniquely identifies each record within the table. For example, Employee ID might be the primary key of an employees table. We can guarantee that no two employees have the same ID (unlike names), so including the boss’s employee ID in the boss field is a unique reference within the database. Thus, all we need to do is provide some mechanism for mapping from Java object references to database keys.
So, when I go to store an Employee in the database, what do I do if the boss is not already there? One option is to recursively store all referenced objects in the database, just as the object streams serialize all referenced objects within a reference tree. Employee and boss will both be stored. What, then, if I manipulate the Employee and store it again? The object streams can’t know that you’ve changed an object, so they don’t store the object again; they simply store a back reference to the original version (unless you completely reset the stream). What should I do? Well, the discussion is moot for now. The current implementation doesn’t support references. But I hope you see some of the issues. And, in a later article, we’ll look at this question more fully.
- Versioning. Finally, there’s versioning. When an object is serialized, the object streams store full class descriptor information that allows the recipient to deserialize the object, even if they have a different version of the class. For example, if you serialize an Employee and then add a field to the Employee class, deserialization can still occur correctly.
This goes back to the old class name issue I mentioned earlier. In this object storage framework, the only meta-information I require is the name of the class that corresponds to a database record. If we wish to support versioning, then, in place of just a class name, we have to store full class-descriptor information about each stored object. Storing such a binary blob appears ugly to me, so I just won’t support it. That’s not to say that you can’t change your class definitions after storing objects, just that there are limitations on what changes you can make. And I’m not, for now, going to tell you what those limitations are. Instead, I’ll let you examine my code and find out for yourself.
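As a sketch of the reference-to-key mapping discussed under reference fields, an identity map can hand out a fresh key the first time each object reference is seen. This is illustrative only; the current implementation does not support references:

```java
import java.util.*;

public class KeyMapper {
    // Map Java object references to stable database keys. An identity map
    // is used so that two distinct-but-equal objects get distinct keys,
    // and a self-referencing object (an employee who is her own boss)
    // simply maps to her own key.
    private final Map<Object, Integer> keys = new IdentityHashMap<> ();
    private int nextKey = 1;

    public int keyFor (Object object) {
        Integer key = keys.get (object);
        if (key == null)
            keys.put (object, key = nextKey++);
        return key;
    }

    public static void main (String[] args) {
        KeyMapper mapper = new KeyMapper ();
        Object alice = new Object ();
        Object bob = new Object ();
        System.out.println (mapper.keyFor (alice)); // prints 1
        System.out.println (mapper.keyFor (bob));   // prints 2
        System.out.println (mapper.keyFor (alice)); // prints 1 again
    }
}
```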
Map-based storage
This article has been solely concerned with the frontend process of object storage: vivisecting objects into fields that can be placed in backend storage. However, for us to see if it works at all, a simple storage implementation is of use. Let’s take a look at it in action.
Class MapStorage
The MapStorage class implements the ObjectStorage interface, simply storing object fields in a Map:
public class MapStorage implements ObjectStorage {
private Map storageMap;
public MapStorage (Map storageMap) {
this.storageMap = storageMap;
}
The constructor in the code above accepts a Map into which all objects will be stored.
Below, the put() method accepts a StorageFields object and inserts it into the map under the specified key. To encode the object for storage in the primary map, we simply create a submap containing the fields and values of the object being stored, along with the class name stored under the key "@class". Our little hack to encode null fields (it’s only possible for strings, and it is not supported by maps) is to store a reference to the submap itself:
public void put (Object key, StorageFields object) {
if (object == null) {
storageMap.remove (key);
return;
}
Map objectMap = new HashMap ();
String className = object.getClassName ();
objectMap.put (classNameKey, className);
Iterator fields = object.getFieldNames ();
while (fields.hasNext ()) {
Object field = fields.next ();
Object value = object.getValue (field);
if (value != null)
objectMap.put (field, value);
else
objectMap.put (field, objectMap);
}
storageMap.put (key, objectMap);
}
private static String classNameKey = "@class";
The get() method, detailed below, looks up a submap in the primary map, and then recreates a RetrievalFields object from the contents of the submap:
public RetrievalFields get (Object key) {
Map objectMap = (Map) storageMap.get (key);
if (objectMap == null)
return null;
String className = (String) objectMap.get (classNameKey);
RetrievalFields object = new RetrievalFields (className);
Iterator fields = objectMap.keySet ().iterator ();
while (fields.hasNext ()) {
String field = (String) fields.next ();
Object value = objectMap.get (field);
if (value == objectMap)
value = null;
object.addField (field, value);
}
return object;
}
}
To sum up this MapStorage class: our map-based storage results in a map of maps, each submap corresponding to the class name and fields of an object that has been written to storage.
In practice
With our serialization-based storer and map-based storage, we are now in a position to test the system. Consider the following code:
import java.io.*;
import java.util.*;
public class StorageTest {
public static void main (String[] args) throws Exception {
Map storageMap = new HashMap ();
ObjectStorage storage = new MapStorage (storageMap);
ObjectStorer storer = new SerializationStorer (storage);
storer.put ("Helms", new RepublicanSenator ("Helms", 16000000000L));
storer.put ("Clinton", new Senator ("Clinton", 52, true));
storer.put ("one", new Integer (1));
System.out.println (storer.get ("Helms"));
System.out.println (storer.get ("Clinton"));
System.out.println (storer.get ("one"));
System.out.println (storageMap);
}
}
class Senator implements Serializable {
protected String name;
protected int age;
protected boolean expendable;
public Senator (String name, int age, boolean expendable) {
this.name = name;
this.age = age;
this.expendable = expendable;
}
public String toString () {
return "Senator[name=" + name + ",age=" + age +
",expendable=" + expendable + "]";
}
}
class RepublicanSenator extends Senator {
protected long age;
public RepublicanSenator (String name, long age) {
super (name, -1, true);
this.age = age;
}
public String toString () {
return "RepublicanSenator[name=" + name + ",age=" + age +
",expendable=" + expendable + "]";
}
}
Yeah yeah, politicks shmoliticks. No similarity to individuals is intended, blah blah. This is just an example.
What we have here is an example of using a SerializationStorer attached to a MapStorage. We have a serializable Senator class with nonpublic fields, and a subclass RepublicanSenator that declares a duplicate age field. We place a few objects in storage (including, for reference, a java.lang.Integer). Then we take them back out of storage and print them out. Finally, we print out the storage map, just so you can see the resulting map of maps. The output is as follows:
RepublicanSenator[name=Helms,age=16000000000,expendable=true]
Senator[name=Clinton,age=52,expendable=true]
1
{Helms={@class=RepublicanSenator, age'=-1,
age=16000000000, expendable=true, name=Helms},
one={value=1, @class=java.lang.Integer},
Clinton={@class=Senator, age=52, expendable=true, name=Clinton}}
Notice how the RepublicanSenator has both an age field and an age' field (from the superclass), and that everything is retrieved successfully.
Wonderful. Next time around, Mike will show you how to really put a RepublicanSenator in cold storage.
Conclusion
When I set out to write this, I thought it would be cool. I could use an overridden object stream to scatter objects into their constituent fields and then gather these fields back together into objects. Add a database and we have something quite exciting.
Then I determined that that would not work. So I went with reflection and came up with a reasonable solution subject to a few limitations.
Thankfully, Java makes it extremely simple to implement protocols, such as the object serialization protocol, so we’re back in the cool stakes with something that’s incredibly useful.
This framework can automatically translate between the Java in-core representation of an object, and a cold, hard — but human readable — database record. It eliminates the need to write JDBC code that manually examines objects and stores their fields in a database and then manually recreates objects from database fields. Instead, we can stay at a high, practical object level. All it takes is some object design, along with your database design, to make sure that everything lines up.
Next time around, Mike will run through how to interface our object storage mechanism with a relational database, as well as how to use it. After that, I’ll return with some work on storing object references and adding some customization features, perhaps with a bit of explanation on the workings of object serialization.
I hope this series gives you optimism about reducing the drudgery of working with databases. In the meantime, long live flat files!