Automate dependency tracking, Part 3

Create a rich, interactive experience for your user

In the past, manual notification mechanisms were the de facto standard for updating dependent data in an object-oriented application. Patterns such as Observer, Publish-Subscribe, Document-View, and Model-View-Controller were certainly important as graphical user interfaces (GUIs) matured. But as interactive applications have grown more sophisticated, expressive, and complex, the chore of manual notification has become a burden on productivity.

In this series of articles, I have introduced a mechanism that removes manual labor from dependency tracking. This mechanism automatically discovers dependencies within the application and ensures that dependent information is always up to date.

Read the whole series on automatic dependency tracking:

  • Part 1: Design an information model for automatically discovering dependencies in interactive object-oriented applications
  • Part 2: Automatic dependency tracking discovers dependencies at runtime and updates the user interface
  • Part 3: Create a rich, interactive experience for your user

Along the way, I developed Nebula, a network design application that demonstrates a typical GUI application’s many key features. It models a problem in such a way that the user can build, traverse, and apply the solution. And it presents a graphical user interface that allows simple repositioning and component editing. However, Nebula is not yet complete; it lacks true user interaction.

An interactive application provides feedback for most user actions. The feedback tells the user what the program expects and foreshadows events. It allows the user to combine several small actions to achieve a larger result. And it lets the user see what is going on inside the model itself.

To achieve a rich user experience, you must add visual feedback for such actions as selection, dragging, placing new devices, and running simulations. Just as automatic dependency tracking helps simplify the construction of the UI itself, it also helps you add rich user interaction. You simply identify the system’s dependent and dynamic state.

Recall that an event changes dynamic state, whereas dependent state simply reacts to other state. Once you identify dynamic and dependent attributes, you use sentries to express them in code (refer to the sidebar “Coding Practices for Automatic Dependency Tracking” for more information). In the previous two installments, we used sentries to capture the dependency of the user interface (UI) on the information model (IM). In this installment, we implement the following four dependency cases:

  1. A UI component’s drag behavior depends upon the mouse position and state
  2. Selection is a dynamic attribute of each UI component
  3. The entire UI depends upon a macro operation’s current mode
  4. Calculation results depend upon inputs

Drag behavior depends upon mouse behavior

First, we tackle how to add visual feedback as the user repositions a device. We want to pick a device when the user presses the mouse button, track the mouse position while the button is pressed, and repaint the device as the mouse moves. We have already written the code that moves the device when the user releases the mouse; now we just need to provide some feedback for the drag.

To do that, let’s identify the dynamic and dependent attributes. The user can directly change the mouse’s position and the button’s status; these attributes are dynamic. As the device is dragged, its image reacts to the mouse’s position and the button’s status; the image is dependent. So the drag image depends upon the mouse’s state.

We need to store the mouse position and button state as dynamic data in the graphic component, so we add data members and dynamic sentries. We also add access methods to IGraphicItemContainer, the interface through which all graphic items know their container. The new GraphicComponent methods and members appear below:

    private GraphicItem getMouseItem()
    {
        m_dynMouseItem.onGet();
        return m_pMouseItem;
    }
    private void setMouseItem( GraphicItem item )
    {
        if ( m_pMouseItem != item )
        {
            m_dynMouseItem.onSet();
            m_pMouseItem = item;
        }
    }
    private Point getFrom()
    {
        m_dynFrom.onGet();
        return new Point( m_ptFrom );
    }
    private void setFrom( Point ptFrom )
    {
        if ( !m_ptFrom.equals(ptFrom) )
        {
            m_dynFrom.onSet();
            m_ptFrom.setLocation( ptFrom );
        }
    }
    private Point getTo()
    {
        m_dynTo.onGet();
        return new Point( m_ptTo );
    }
    private void setTo( Point ptTo )
    {
        if ( !m_ptTo.equals(ptTo) )
        {
            m_dynTo.onSet();
            m_ptTo.setLocation( ptTo );
        }
    }
    private boolean isDragging()
    {
        m_dynDragging.onGet();
        return m_bDragging;
    }
    private void setDragging( boolean bDragging )
    {
        if ( m_bDragging != bDragging )
        {
            m_dynDragging.onSet();
            m_bDragging = bDragging;
        }
    }
    // Dynamic data.
    private GraphicItem m_pMouseItem = null;
    private Point m_ptFrom = new Point();
    private Point m_ptTo = new Point();
    private boolean m_bDragging = false;
    // Dynamic sentries.
    private Dynamic m_dynMouseItem = new Dynamic();
    private Dynamic m_dynFrom = new Dynamic();
    private Dynamic m_dynTo = new Dynamic();
    private Dynamic m_dynDragging = new Dynamic();
    // Implementation of the graphic item container interface.
    IGraphicItemContainer container = new IGraphicItemContainer()
    {
        ...
        public boolean isMouseDragging( GraphicItem item )
        {
            return isDragging() && getMouseItem() == item;
        }
        public boolean isMousePressing( GraphicItem item )
        {
            return !isDragging() && getMouseItem() == item;
        }
        public Point getMouseFromPosition()
        {
            return getFrom();
        }
        public Point getMouseToPosition()
        {
            return getTo();
        }
    };

The above code follows the pattern for dynamic attributes: For each dynamic attribute, we define a data member, a dynamic sentry, an access method, and a mutate method. The access method calls the dynamic sentry’s onGet(), and the mutate method calls the dynamic sentry’s onSet(). This simple pattern prepares the automatic dependency tracking system for discovering dependencies upon these dynamic attributes.
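To see this pattern in isolation, here is a minimal, self-contained sketch of how such sentries might work. The Dynamic and Dependent classes below are hypothetical stand-ins for the framework's classes, not the actual implementation; they show one dynamic attribute (a mouse x coordinate) and one dependent attribute (a drag offset) that re-derives itself on demand:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical minimal sketch of the Dynamic/Dependent sentry pattern.
public class SentrySketch {
    static Dependent current = null;   // the dependent whose update is running

    static class Dynamic {
        final Set<Dependent> deps = new HashSet<>();
        // The access method calls onGet(): discover who depends on us.
        void onGet() { if (current != null) deps.add(current); }
        // The mutate method calls onSet(): invalidate all dependents.
        void onSet() { for (Dependent d : deps) d.upToDate = false; deps.clear(); }
    }

    static class Dependent {
        final Runnable update;
        boolean upToDate = false;
        Dependent(Runnable u) { update = u; }
        void onGet() {
            if (!upToDate) {
                Dependent prev = current;
                current = this;          // any Dynamic read during the update registers us
                try { update.run(); } finally { current = prev; }
                upToDate = true;
            }
        }
    }

    // Dynamic attribute: the mouse's x position, following the article's pattern.
    static int mouseX = 0;
    static Dynamic dynMouseX = new Dynamic();
    static int getMouseX() { dynMouseX.onGet(); return mouseX; }
    static void setMouseX(int x) { if (mouseX != x) { dynMouseX.onSet(); mouseX = x; } }

    // Dependent attribute: a drag offset derived from the mouse position.
    static int dragOffset;
    static Dependent depDragOffset = new Dependent(() -> dragOffset = getMouseX() - 100);
    static int getDragOffset() { depDragOffset.onGet(); return dragOffset; }

    public static void main(String[] args) {
        setMouseX(140);
        System.out.println(getDragOffset());  // 40
        setMouseX(175);                       // onSet() marks the offset out of date
        System.out.println(getDragOffset());  // 75: re-derived on demand
    }
}
```

Note that no notification code is written by hand: setMouseX() merely invalidates, and the next access to getDragOffset() re-runs the update method.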

Now that we’ve completed the dynamic side of the equation, we next create the dependent attributes for drag feedback. In Part 2, we defined a dependent draw glyph and draw offset for each graphic item. Now we add a drag glyph and drag offset. The draw image remains fixed while the drag image moves under the mouse. Each image comprises two dependent attributes — a Glyph and a Point — each with a dependent sentry. A third dependent sentry ties the Glyph and Point together and tells the component to repaint. The attributes and sentries are defined in the GraphicItem class as follows:

    protected Glyph onUpdateDragGlyph()
    {
        return new Glyph();
    }
    protected Point onUpdateDragOffset()
    {
        return new Point();
    }
    ...
    // Glyph and offset of drag image.
    private Glyph m_dragGlyph = null;
    private Point m_dragOffset = new Point();
    ...
    private Dependent m_depDragGlyph = new Dependent( new IUpdate()
    {
        public void onUpdate()
        {
            if ( m_container.isMouseDragging(getThis()) )
                m_dragGlyph = onUpdateDragGlyph();
            else
                m_dragGlyph = null;
        }
    });
    private Dependent m_depDragOffset = new Dependent( new IUpdate()
    {
        public void onUpdate()
        {
            m_dragOffset = onUpdateDragOffset();
        }
    });
    ...
    // Dependent sentry to update the drag image.
    private Dependent m_depDragImage = new Dependent( new IUpdate()
    {
        public void onUpdate()
        {
            // Invalidate the old image.
            if ( m_dragGlyph != null )
                m_dragGlyph.invalidate( m_container, m_dragOffset );
            // Update the image.
            m_depDragGlyph.onGet();
            if ( m_dragGlyph != null )
                // We only care about the offset if we have a glyph.
                m_depDragOffset.onGet();
            // Invalidate the new image.
            if ( m_dragGlyph != null )
                m_dragGlyph.invalidate( m_container, m_dragOffset );
        }
    });

Also notice one small addition to the paint() method in the code below. Whereas before we painted only the draw glyph, we now also paint the drag glyph — if it exists:

    public void paint( Graphics g )
    {
        m_drawGlyph.paint( g, m_drawOffset );
        if ( m_dragGlyph != null )
            m_dragGlyph.paint( g, m_dragOffset );
    }

With both the dynamic and dependent attributes in place, the GraphicItem base class is poised to react to the mouse’s position. A derived class simply overrides the onUpdateDragGlyph() and onUpdateDragOffset() methods, and refers to the mouse’s position. The automatic dependency tracking framework takes care of the rest. Here’s how the GraphicHub class implements its drag image:

    public static Glyph standardGlyph( Glyph g, boolean bReached )
    {
        return g.
            rectangle( new Rectangle(-24, -6, 49, 13), Color.black,
                bReached ? Color.green : Color.white ).
            ellipse( new Rectangle(-15, -3, 7, 7), Color.black, null ).
            ellipse( new Rectangle(-3, -3, 7, 7), Color.black, null ).
            ellipse( new Rectangle(9, -3, 7, 7), Color.black, null );
    }
    ...
    protected Glyph onUpdateDragGlyph()
    {
        // Same as draw glyph.
        return standardGlyph( new Glyph(), false );
    }
    protected Point onUpdateDragOffset()
    {
        // Offset the location by the drag delta.
        Point ptOffset = m_locatedHub.getLocation();
        Point ptFrom = getMouseFromPosition();
        Point ptTo = getMouseToPosition();
        ptOffset.translate( ptTo.x - ptFrom.x, ptTo.y - ptFrom.y );
        return ptOffset;
    }

The code that creates a glyph moves into a static method so that it can easily be called from more than one place. Ignore the bReached parameter for now; we will use it later to change the hub’s color under certain conditions. The above code changes are all you need to make the UI respond as the user drags a device.

Selection: A graphic item’s dynamic attribute

To implement dragging, we recognized that the mouse’s position and status are the only states under the user’s direct control; thus, the position and status are dynamic. A graphic item’s reaction depends completely upon the mouse’s dynamic attributes. We stored the mouse’s dynamic state in one place so that each graphic item depended upon the one mouse.

Selection, however, represents a dynamic attribute on each graphic item. The user can reach in and change any graphic item’s selection state; the item’s glyph should react accordingly. To implement selection, we add one attribute and one dynamic sentry to each graphic item. You will find the data member, dynamic sentry, and access and mutate methods in the GraphicItem class:

    public boolean getSelected()
    {
        m_dynSelected.onGet();
        return m_bSelected;
    }
    public void setSelected( boolean bSelected )
    {
        if ( m_bSelected != bSelected )
        {
            m_dynSelected.onSet();
            m_bSelected = bSelected;
        }
    }
    ...
    // Selection.
    private boolean m_bSelected = false;
    ...
    // Dynamic sentries.
    private Dynamic m_dynSelected = new Dynamic();

Each specific graphic item, such as GraphicHub, must react to selection by updating its glyph. We take advantage of the fact that the glyph is a dependent attribute and selection, a dynamic attribute. With the following code, the GraphicHub class obtains the desired behavior:

    protected Glyph onUpdateDrawGlyph()
    {
        Glyph g = new Glyph();
        if ( getSelected() )
        {
            // Draw selection behind the glyph.
            g.rectangle( new Rectangle(-27, -9, 56, 20), null, Color.blue );
        }
        return standardGlyph( g, isReached() );
    }

The onUpdateDrawGlyph() method obtains the item’s selection state, thus registering a dependency upon that state. The GraphicHub class, its base class GraphicItem, and its container, the GraphicComponent, all collaborate to set the selection state in response to mouse events, even implementing multiple selection through the Control-click mechanism. Each device’s image reacts automatically as the user selects and unselects items.

The UI mode is shared dynamic state

Both dragging and selection represent per-item behavior. Each individual graphic item determines its own dependent behavior. However, when we implement placement of a new device, we cannot encapsulate the desired behavior within each individual item; we must place the entire UI into a new mode.

To place a new device on the network UI, the user right-clicks the white space and selects a command from the context menu — New Computer for example. A computer image appears under the mouse; the user places that image at the desired location with a single click. The user interaction involves two separate gestures: menu selection and clicking. Each gesture represents a change in the state of the entire UI. The intermediate state is a UI mode global to the entire view.

The UI mode is a dynamic state machine: the machine transitions among states based on user events. The reaction of the UI components is dependent. The image that prompts the user to place a new computer depends on both the state and mouse position. Here again, we can take advantage of automatic dependency tracking by inserting the appropriate sentries.

We define a single class, NetworkComponentMode, to record the UI’s dynamic state. This class represents a simple state machine that tracks the user’s actions during a multiple-gesture interaction. Each gesture transitions the machine into a different state. At certain transitions, the state machine recognizes the user’s request and modifies the model accordingly. The NetworkComponentMode class is made up of dynamic attributes; it defines the Normal and New Computer states as follows:

public class NetworkComponentMode
{
    public NetworkComponentMode( LocatedNetwork locatedNetwork )
    {
        m_locatedNetwork = locatedNetwork;
    }
    public void reset()
    {
        // Go back to the "normal" state.
        if ( m_state != STATE_NORMAL )
        {
            m_dynState.onSet();
            m_state = STATE_NORMAL;
            m_from = null;
            m_start = null;
        }
    }
    public State getState()
    {
        m_dynState.onGet();
        return m_state;
    }
    ...
    public void startNewComputer()
    {
        // Go to the "new computer" state.
        m_dynState.onSet();
        m_state = STATE_NEW_COMPUTER;
    }
    public void endNewComputer( Point ptLocation )
    {
        // Create the new computer.
        Computer computer = m_locatedNetwork.getNetwork().createComputer();
        m_locatedNetwork.getLocatedDevice( computer ).setLocation( ptLocation );
        // Go back to the "normal" state.
        m_dynState.onSet();
        m_state = STATE_NORMAL;
    }
    ...
    // Definitive data.
    private LocatedNetwork m_locatedNetwork;
    // Which mode are we in.
    static public class State {}
    static public final State STATE_NORMAL = new State();
    static public final State STATE_NEW_COMPUTER = new State();
    ...
    private State m_state = STATE_NORMAL;
    private Dynamic m_dynState = new Dynamic();
    ...
}

The NetworkComponentMode class represents the current state with an enumeration, monitored appropriately by a dynamic sentry in access and mutate methods.

The UI reacts to the New Computer mode by placing an additional graphic item on the screen. Since the computer in question has not yet been added to the IM, the existing GraphicComputer class is not appropriate for these new graphic items; that class is specifically designed to represent an existing computer. Instead, we design the GraphicNewComputer class expressly for the intermediate New Computer mode. This item generates an image based on mode and mouse position, not on the information model. The GraphicNewComputer class appears below in its entirety:

public class GraphicNewComputer extends GraphicItem
{
    public GraphicNewComputer( NetworkComponentMode mode )
    {
        m_mode = mode;
    }
    public boolean matches(IRecyclableObject parm1)
    {
        // Don't recycle this object.
        return false;
    }
    public Glyph onUpdateDrawGlyph()
    {
        return GraphicComputer.standardGlyph( Computer.WORKSTATION, new Glyph(), false, false );
    }
    public Point onUpdateDrawOffset()
    {
        return getMouseFromPosition();
    }
    public void onLClick( Point pt, int nModifiers )
    {
        m_mode.endNewComputer( pt );
    }
    private NetworkComponentMode m_mode;
}

When the user clicks a GraphicNewComputer, the object reports the event to the shared NetworkComponentMode. When the mode transitions from STATE_NEW_COMPUTER back to STATE_NORMAL via the method endNewComputer(), a new computer is added to the network and immediately located at the desired point. Think about what has to happen when that method executes: A new Computer object is added to the Network, which trips the onSet() method of the Network’s m_dynDevices dynamic sentry. That causes the LocatedNetwork’s list of LocatedDevices to go out of date, which in turn causes the NetworkComponent’s list of GraphicItems to go out of date. endNewComputer()’s next line queries the LocatedNetwork for a specific LocatedDevice — the one that represents the new Computer — which forces the dependent sentry m_depDevices to update the LocatedDevice map. Within this update method, a LocatedComputer is created to represent the new Computer. This new LocatedComputer object returns to endNewComputer(), where its location is set. After the method exits, the NetworkComponent’s list of GraphicItems is finally updated, at which time the new GraphicComputer is created and displayed.

Better yet, don’t think about what has to happen. You don’t have to; the automatic dependency tracking system ensures everything works properly.

Calculation results depend upon many attributes

So far, we have achieved dragging, selection, and new component placement — all impressive feats of user interaction. But now we turn our attention back to the reason the application was built in the first place: the problem domain. The user wishes to invoke the information model’s predictive properties to solve an actual problem. In other words, after constructing a network, the user wants to route a packet. Here we employ automatic dependency tracking in its most pervasive use yet.

Recall from Part 1 that the Nebula information model allows the client to build, traverse, and apply a solution. In particular, the IM includes methods for routing a packet from a source computer to a target IP address. Along the way, the packet records all devices that it reaches. The user would like to pick a source computer, enter a target IP address, and immediately see all reached devices. The source computer and target IP address are dynamic; the list of reached devices is dependent.

We create a new class, NetworkCalculation, that holds the dynamic and dependent information. All components of a single view share one instance of the NetworkCalculation class. The user right-clicks a computer, selects the Route a packet… context-menu item, and enters an IP address. The GraphicComputer sets the source Computer and target IP in this one shared NetworkCalculation object.

The NetworkCalculation class records the dynamic source Computer and target IP, as well as the dependent reach list. Below is the entire class, complete with dynamic and dependent sentries:

public class NetworkCalculation
{
    public NetworkCalculation( Network network )
    {
        m_network = network;
    }
    public IP getTarget()
    {
        m_dynTarget.onGet();
        return new IP( m_ipTarget );
    }
    public void setTarget( IP ipTarget )
    {
        if ( !m_ipTarget.equals(ipTarget) )
        {
            m_dynTarget.onSet();
            m_ipTarget = new IP( ipTarget );
        }
    }
    public Computer getSource()
    {
        m_dynSource.onGet();
        return m_cSource;
    }
    public void setSource( Computer cSource )
    {
        if ( m_cSource != cSource )
        {
            m_dynSource.onSet();
            m_cSource = cSource;
        }
    }
    public Device.ConstantIterator getPacketReach()
    {
        m_depPacket.onGet();
        return m_packet.getReachIterator();
    }
    public IP.ConstantIterator getPacketTrace()
    {
        m_depPacket.onGet();
        return m_packet.getTraceIterator();
    }
    // Definitive data.
    private Network m_network;
    // Dynamic data.
    private IP m_ipTarget = new IP();
    private Computer m_cSource = null;
    // Dynamic sentries.
    private Dynamic m_dynTarget = new Dynamic();
    private Dynamic m_dynSource = new Dynamic();
    // Dependent data.
    private Packet m_packet;
    // Dependent sentries.
    private Dependent m_depPacket = new Dependent( new IUpdate()
    {
        public void onUpdate()
        {
            // Route a packet from the source to the target.
            m_packet = new Packet( getTarget() );
            Computer cSource = getSource();
            if ( cSource != null )
            {
                NIC.ConstantIterator itNICs = cSource.getNICIterator();
                while ( itNICs.hasNext() )
                {
                    itNICs.next().forwardPacket( m_packet );
                }
            }
        }
    } );
}

Now we can finally talk about the mysterious isReached() method that GraphicHub calls. isReached()’s return value is used to color the glyph under certain conditions: when the routed packet reaches the hub. As you might have already guessed, isReached() is an access method for a dependent attribute that looks through the shared calculation object’s reach list to locate the current device. Here’s the code:

    private boolean isReached()
    {
        m_depReached.onGet();
        return m_bReached;
    }
    ...
    // Dependent data.
    private boolean m_bReached;
    // Dependent sentries.
    private Dependent m_depReached = new Dependent( new IUpdate()
    {
        public void onUpdate()
        {
            m_bReached = false;
            Device.ConstantIterator itReach = m_networkCalculation.getPacketReach();
            while ( itReach.hasNext() && !m_bReached )
                if ( itReach.next() == m_locatedHub.getDevice() )
                    m_bReached = true;
        }
    } );

The other graphic device items (GraphicComputer and GraphicRouter) feature similar code, so the UI highlights all devices that a packet reaches. When the user selects a source computer and enters a target IP address, the UI immediately displays the result. The user does not need to press an additional Calculate button.

You can readily see the reach list’s dependency relationship upon the calculation inputs. But if you look closer, you will see that the dependency goes much deeper. In order to route the packet, the calculation object must visit many devices in the network. Every time it does, it triggers dynamic sentries, which discover dependencies of the calculation results upon the devices of the network itself. All devices visited while routing a packet become the calculation’s precedents.

If the calculation object doesn’t visit a particular device, then no dependency is discovered upon that device; it does not become a precedent. As a result, when the user changes a device that the packet does not reach, the calculation is not re-evaluated. However, when the user changes a device that the packet does reach, then the calculation is re-evaluated, and the new results display immediately.
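This selective re-evaluation can be illustrated with a compact, self-contained sketch (again using hypothetical stand-ins for the sentry classes). The dependent result visits input B only when a flag directs it to, so a change to B forces a recompute only if B was actually visited during the last update:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: precedents are discovered only for inputs actually visited.
public class ConditionalPrecedents {
    static Dependent current = null;

    static class Dynamic {
        final Set<Dependent> deps = new HashSet<>();
        void onGet() { if (current != null) deps.add(current); }
        void onSet() { for (Dependent d : deps) d.upToDate = false; deps.clear(); }
    }

    static class Dependent {
        final Runnable update;
        boolean upToDate = false;
        Dependent(Runnable u) { update = u; }
        void onGet() {
            if (!upToDate) {
                Dependent prev = current;
                current = this;
                try { update.run(); } finally { current = prev; }
                upToDate = true;
            }
        }
    }

    // Two "devices" and a flag steering which ones the calculation visits.
    static Dynamic dynA = new Dynamic(), dynB = new Dynamic(), dynFlag = new Dynamic();
    static int a = 1, b = 10;
    static boolean useB = false;
    static int result;
    static int updateCount = 0;
    static Dependent depResult = new Dependent(() -> {
        updateCount++;
        dynFlag.onGet();
        dynA.onGet();
        result = a;
        if (useB) { dynB.onGet(); result += b; }  // B becomes a precedent only if visited
    });

    static int getResult() { depResult.onGet(); return result; }

    public static void main(String[] args) {
        System.out.println(getResult());  // first evaluation visits A only: 1
        b = 20; dynB.onSet();             // B was never a precedent...
        getResult();
        System.out.println(updateCount);  // ...so no re-evaluation: still 1
        a = 2; dynA.onSet();              // A was a precedent
        System.out.println(getResult());  // re-evaluated: 2
        System.out.println(updateCount);  // 2
    }
}
```

Changing b leaves the result untouched because dynB never registered the result as a dependent; changing a invalidates it, and the next access re-runs the calculation. The same principle lets Nebula skip re-routing a packet when the user edits a device the packet never reached.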

Give it a try. Build and run the application. The test network appears. Route a packet from one workstation to another. You see that the packet reaches the hub, the other workstation, and the first router. That’s because a hub forwards packets to all attached devices, but the router stops internal packets from reaching the external network. Now, right-click in the white space, select New Computer from the context menu, and drop the new computer near the hub. Right-click the new computer, select New Cable, and connect the cable to the hub. Once the connection is made, the computer becomes highlighted, because the packet reaches that computer too.

Simplify GUI construction

Automatic dependency tracking gives you the power to create highly interactive applications without manually keeping the UI current. Unlike patterns such as Observer, Publish-Subscribe, Document-View, or Model-View-Controller, you do not need to register for notification or route update messages. The automatic dependency tracking system discovers dependencies within the system and invokes update methods to keep dependent data up to date.

When designing an interactive end-user application, look for opportunities to take advantage of dependency relationships. Look for separations between dynamic model attributes and dependent view attributes. Look for dynamic data within the view itself, such as component selection. Look for dynamic view-global modes that influence the dependent attributes of all components. Finally, look for dependencies upon the calculation inputs, where the application’s original intent asserts itself. Using automatic dependency tracking in these situations allows you to exert minimal effort to achieve maximum results.

Michael L. Perry has been a professional Windows developer for more than seven years and maintains expertise in COM, Java, XML, SOAP, .Net, and other technologies. He formed Mallard Software Designs in 1998, where he applies the mathematical rigor of proof — establishing the correctness of a solution before implementing it — to software design. Michael applies a cohesive set of rules to all his software models, one of which — dependency — is the foundation for automatic dependency tracking.

Source: www.infoworld.com