Spring & Hibernate
Spring is a lightweight open source framework for developing enterprise applications using POJOs as building blocks and dependency injection. Dependency injection (DI) is a programming design pattern and architectural model, sometimes also referred to as inversion of control (IOC), although technically speaking dependency injection refers to an implementation of one particular form of IOC. Dependency injection describes the situation where one object relies on a second object to provide a particular capability: for example, being passed a database connection as a constructor argument instead of creating one internally. The term "dependency injection" is something of a misnomer, since it is not a dependency that is injected but rather a provider of some capability or resource. There are three common forms of dependency injection: setter-, constructor- and interface-based injection. Dependency injection is a way to achieve loose coupling. Inversion of control (IOC) relates to the way in which an object obtains references to its dependencies, which is often done through a lookup method. The advantage of inversion of control is that it decouples objects from specific lookup mechanisms and from the implementations of the objects they depend on. As a result, more flexibility is gained both for production applications and for testing.
There are various IOC containers available. Apart from Spring, HiveMind, PicoContainer and Avalon also support IOC; a recent addition to this list is the JBoss Microcontainer. Spring is the most widely used framework, and more and more enterprise applications previously written with EJBs are being converted to Spring.
One of the chief advantages of the Spring framework is its layered architecture, which allows you to be selective about which of its components you use while also providing a cohesive framework for J2EE application development. The Spring modules are built on top of the core container, which defines how beans are created, configured, and managed, as shown in the following figure. Each of the modules (or components) that comprise the Spring framework can stand on its own or be implemented jointly with one or more of the others. The functionality of each component is as follows:
* The core container: The core container provides the essential functionality of the Spring framework. A primary component of the core container is the BeanFactory, an implementation of the Factory pattern. The BeanFactory applies the Inversion of Control (IOC) pattern to separate an application's configuration and dependency specification from the actual application code.
* Spring context: The Spring context is a configuration file that provides context information to the Spring framework. The Spring context includes enterprise services such as JNDI, EJB, e-mail, internationalization, validation, and scheduling functionality.
* Spring AOP: The Spring AOP module integrates aspect-oriented programming functionality directly into the Spring framework through its configuration management feature. As a result, you can easily AOP-enable any object managed by the Spring framework. The Spring AOP module provides transaction management services for objects in any Spring-based application. With Spring AOP you can incorporate declarative transaction management into your applications without relying on EJB components.
* Spring DAO: The Spring JDBC DAO abstraction layer offers a meaningful exception hierarchy for managing the exception handling and error messages thrown by different database vendors. The exception hierarchy simplifies error handling and greatly reduces the amount of exception-handling code you need to write, such as opening and closing connections. Spring DAO's JDBC-oriented exceptions comply with its generic DAO exception hierarchy.
* Spring ORM: The Spring framework plugs into several ORM frameworks to provide its object-relational tool, including JDO, Hibernate, and iBatis SQL Maps. All of these comply with Spring's generic transaction and DAO exception hierarchies.
* Spring Web module: The Web context module builds on top of the application context module, providing contexts for Web-based applications. As a result, the Spring framework supports integration with Jakarta Struts. The Web module also eases the tasks of handling multipart requests and binding request parameters to domain objects.
* Spring MVC framework: The Model-View-Controller (MVC) framework is a full-featured MVC implementation for building Web applications. The MVC framework is highly configurable via strategy interfaces and accommodates numerous view technologies including JSP, Velocity, Tiles, iText, and POI.
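Before moving on, here is a minimal sketch of how constructor and setter injection look in plain Java. The EventService, EventDao and JdbcEventDao names are hypothetical and not part of Spring; the point is only that the service receives its collaborator instead of looking it up or constructing it.

// Single-file sketch: the collaborator is supplied from outside (by a container or a test).
interface EventDao {
    void save(String eventName);
}

class JdbcEventDao implements EventDao {
    public void save(String eventName) {
        // JDBC access would go here
    }
}

class EventService {
    private EventDao eventDao;

    // Constructor injection: the container passes the dependency in at creation time.
    EventService(EventDao eventDao) {
        this.eventDao = eventDao;
    }

    // Setter injection: the container calls this after instantiation.
    public void setEventDao(EventDao eventDao) {
        this.eventDao = eventDao;
    }

    public void createEvent(String name) {
        eventDao.save(name);
    }
}

In Spring, the same wiring is expressed declaratively in the bean definition file using constructor-arg or property elements, rather than in application code.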
Spring framework functionality can be used in any J2EE server, and most of it is also adaptable to non-managed environments. A central focus of Spring is to allow for reusable business and data-access objects that are not tied to specific J2EE services. Such objects can be reused across J2EE environments (Web or EJB), standalone applications, test environments, and so on, without any hassle. The core of Spring is the org.springframework.beans package, designed for working with JavaBeans. This package typically isn't used directly by users, but underpins much Spring functionality. The next higher layer of abstraction is the bean factory. A Spring bean factory is a generic factory that enables objects to be retrieved by name, and which can manage relationships between objects. Bean factories support two modes of object creation:
* Singleton: there is one shared instance of the object with a particular name, which is retrieved on lookup. This is the default, and most often used, mode. It is ideal for stateless service objects.
* Prototype (non-singleton): each retrieval results in the creation of an independent object. For example, this could be used to allow each caller to have its own distinct object reference.
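A quick sketch of the difference, assuming a bean factory loaded from an XML file and a bean named exampleBusinessObject (the file name and bean name are only placeholders, reused from the lookup example further below). With the default singleton mode the two lookups return the same instance; if the definition were marked as a prototype, each lookup would return a new object.

import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.ClassPathResource;

public class ScopeDemo {
    public static void main(String[] args) {
        XmlBeanFactory factory = new XmlBeanFactory(new ClassPathResource("myFile.xml"));

        Object first = factory.getBean("exampleBusinessObject");
        Object second = factory.getBean("exampleBusinessObject");

        // true for a singleton definition, false for a prototype definition
        System.out.println(first == second);
    }
}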
Because the Spring container manages relationships between objects, it can add value where necessary through services such as transparent pooling for managed POJOs, and support for hot swapping, where the container introduces a level of indirection that allows the target of a reference to be swapped at runtime without affecting callers and without loss of thread safety. One of the beauties of Dependency Injection (discussed shortly) is that all this is possible transparently, with no API involved. As org.springframework.beans.factory.BeanFactory is a simple interface, it can be implemented in different ways. The BeanDefinitionReader interface separates the metadata format from the BeanFactory implementations themselves, so the generic BeanFactory implementations Spring provides can be used with different types of metadata. You could easily implement your own BeanFactory or BeanDefinitionReader, although few users find a need to. The most commonly used BeanFactory implementations are:
* XmlBeanFactory: parses a simple, intuitive XML structure defining the classes and properties of named objects. A DTD is provided to make authoring easier.
* DefaultListableBeanFactory: provides the ability to parse bean definitions in properties files, and to create BeanFactories programmatically.
Each bean definition can be a POJO (defined by class name and JavaBean initialisation properties or constructor arguments), or a FactoryBean. The FactoryBean interface adds a level of indirection. Typically this is used to create proxied objects using AOP or other approaches: for example, proxies that add declarative transaction management. This is conceptually similar to EJB interception, but works out much simpler in practice, and is more powerful. BeanFactories can optionally participate in a hierarchy, "inheriting" definitions from their ancestors. This enables the sharing of common configuration across a whole application, while individual resources such as controller servlets also have their own independent set of objects. A Spring ApplicationContext is a subinterface of BeanFactory. Spring users normally configure their applications in XML "bean definition" files. The root of a Spring XML bean definition document is a <beans> element, which contains one or more <bean> definitions. We normally specify the class and properties of each bean definition. Beans defined in XML files are lazily loaded, which means that the beans themselves will not be instantiated until they are needed. Spring can handle lists, maps and java.util.Properties. Other advanced container capabilities include:
* Inner beans, in which a property element contains an anonymous bean definition not visible at top-level scope.
* Post processors: special bean definitions that customize container behavior.
* Method Injection, a form of IoC in which the container implements an abstract method or overrides a concrete method to inject a dependency. This is a more rarely used form of Dependency Injection than Setter or Constructor Injection. However, it can be useful to avoid an explicit container dependency when looking up a new object instance for each invocation, or to allow configuration to vary over time: for example, with the method implementation being backed by a SQL query in one environment and a file system read in another.
Bean factories and application contexts are often associated with a scope defined by the J2EE server or web container, such as a ServletContext or an EJB. These hooks provided by the J2EE specification generally avoid the need to use a Singleton to bootstrap a bean factory. However, it's trivial to instantiate a BeanFactory programmatically if we wish. For example, we could create the bean factory and get a reference to the business object defined above in the following two lines of code:
XmlBeanFactory bf = new XmlBeanFactory(new ClassPathResource("myFile.xml", getClass()));
MyBusinessObject mbo = (MyBusinessObject) bf.getBean("exampleBusinessObject");
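A bean definition file along these lines would back the lookup above. This is a minimal sketch: the bean name matches the getBean() call, while the com.example package and the timeout property are only illustrative assumptions.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN"
    "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
    <!-- hypothetical class and property; the bean id matches the getBean() call above -->
    <bean id="exampleBusinessObject" class="com.example.MyBusinessObject">
        <property name="timeout"><value>30</value></property>
    </bean>
</beans>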
This code will work outside an application server: it doesn't even depend on J2EE, as the Spring IoC container is pure Java. Each bean definition can be a POJO (defined by class name and JavaBean initialization properties) or a FactoryBean. The FactoryBean interface adds a level of indirection to the applications built using the Spring framework. You can download sample Spring 2.0 kickstart code from the Google Code site http://code.google.com/p/spring-kickstart/
Spring AOP
The first goal of Spring's AOP support is to provide J2EE services to POJOs. Spring AOP is portable between application servers, so there's no risk of vendor lock-in. It works in either a web or an EJB container, and has been used successfully in WebLogic, Tomcat, JBoss, Resin, Jetty, Orion and many other application servers and web containers. Spring AOP supports method interception. Key AOP concepts supported include:
* Interception: custom behaviour can be inserted before or after method invocations against any interface or class. This is similar to "around advice" in AspectJ terminology.
* Introduction: specifying that an advice should cause an object to implement additional interfaces. This can amount to mixin inheritance.
* Static and dynamic pointcuts: specifying the points in program execution at which interception should take place. Static pointcuts concern method signatures; dynamic pointcuts may also consider method arguments at the point where they are evaluated. Pointcuts are defined separately from interceptors, enabling a standard interceptor to be applied in different applications and code contexts.
Spring supports both stateful (one instance per advised object) and stateless interceptors (one instance for all advice). Spring does not support field interception. This is a deliberate design decision. I have always felt that field interception violates encapsulation. I prefer to think of AOP as complementing, rather than conflicting with, OOP. In five or ten years time we will probably have travelled a lot farther on the AOP learning curve and feel comfortable giving AOP a seat at the top table of application design. (At that point language-based solutions such as AspectJ may be far more attractive than they are today.) Spring implements AOP using dynamic proxies (where an interface exists) or CGLIB byte code generation at runtime (which enables proxying of classes). Both these approaches work in any application server, or in a standalone environment. Spring was the first AOP framework to implement the AOP Alliance interfaces (www.sourceforge.net/projects/aopalliance). These represent an attempt to define interfaces allowing interoperability of interceptors between AOP frameworks. Spring integrates with AspectJ, providing the ability to seamlessly include AspectJ aspects into Spring applications. Since Spring 1.1 it has been possible to dependency inject AspectJ aspects using the Spring IoC container, just like any Java class. Thus AspectJ aspects can depend on any Spring-managed objects. The integration with the forthcoming AspectJ 5 release is still more exciting, with AspectJ set to provide the ability to dependency inject any POJO using Spring, based on an annotation-driven pointcut. Because Spring advises objects at instance, rather than class loader, level, it is possible to use multiple instances of the same class with different advice, or to use unadvised instances along with advised instances. Perhaps the commonest use of Spring AOP is for declarative transaction management. This
builds on the transaction abstraction described above, and can deliver declarative transaction management on any POJO. Depending on the transaction strategy, the underlying mechanism can be JTA, JDBC, Hibernate or any other API offering transaction management. It's also possible to use Spring AOP to implement application-specific aspects. Whether or not you choose to do this depends on your level of comfort with AOP concepts, rather than on Spring's capabilities, but it can be very useful. Application-specific aspects can be a powerful way of removing the need for boilerplate code across many methods. Spring AOP integrates transparently with the Spring BeanFactory concept. Code obtaining an object from a Spring BeanFactory doesn't need to know whether or not it is advised. As with any object, the contract will be defined by the interfaces the object implements. The following XML snippet illustrates how to define an AOP proxy named myTest, exposing the org.springframework.beans.ITestBean interface and applying an interceptor named txInterceptor followed by the special "target" entry:
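A sketch of such a definition, assuming the standard ProxyFactoryBean form; the bean id myTest, the interface org.springframework.beans.ITestBean and the interceptor names txInterceptor and target all come from the surrounding discussion, while the layout itself is reconstructed.

<bean id="myTest" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="proxyInterfaces">
        <value>org.springframework.beans.ITestBean</value>
    </property>
    <property name="interceptorNames">
        <list>
            <!-- an interceptor or advisor defined elsewhere in the factory -->
            <value>txInterceptor</value>
            <!-- the name of a bean in the factory that implements ITestBean -->
            <value>target</value>
        </list>
    </property>
</bean>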
Note that the class of the bean definition is always the AOP framework's ProxyFactoryBean, although the type of the bean as used in references or returned by the BeanFactory getBean() method will depend on the proxy interfaces. (Multiple proxied interfaces are supported.) The "interceptorNames" property of the ProxyFactoryBean takes a list of Strings. (Bean names must be used rather than bean references, as new instances of stateful interceptors may need to be created if the proxy is a "prototype", rather than a singleton bean definition.) The names in this list can be interceptors or pointcuts (interceptors plus information about when they should apply). The "target" value in the list above automatically creates an "invoker interceptor" wrapping the target object; it is the name of a bean in the factory that implements the proxy interface. The myTest bean in this example can be used like any other bean in the bean factory. For example, other objects can reference it via <ref> elements, and these references will be set by Spring IoC. There are a number of ways to set up proxying more concisely if you don't need the full power of the AOP framework, such as using Java 5.0 annotations to drive transactional proxying without XML metadata, or the ability to use a single piece of XML to apply a consistent proxying strategy to many beans defined in a Spring factory. It's also possible to construct AOP proxies programmatically without using a BeanFactory, although this is more rarely used:
TestBean target = new TestBean();
DebugInterceptor di = new DebugInterceptor();
MyInterceptor mi = new MyInterceptor();
ProxyFactory factory = new ProxyFactory(target);
factory.addInterceptor(0, di);
factory.addInterceptor(1, mi);
// An "invoker interceptor" is automatically added to wrap the target
ITestBean tb = (ITestBean) factory.getProxy();
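For illustration, a hand-rolled around-advice like the DebugInterceptor used above could be written against the AOP Alliance interfaces that Spring implements; this is a sketch, not Spring's own DebugInterceptor class, and the log wording is arbitrary.

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

// A stateless interceptor: one instance can advise any number of objects.
public class DebugInterceptor implements MethodInterceptor {
    public Object invoke(MethodInvocation invocation) throws Throwable {
        System.out.println("Entering " + invocation.getMethod().getName());
        try {
            // Proceed to the next interceptor in the chain, or to the target method.
            return invocation.proceed();
        } finally {
            System.out.println("Leaving " + invocation.getMethod().getName());
        }
    }
}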
Spring Web Flow is part of the latest version of the Spring MVC web framework. Spring's documentation is a good place to start: http://static.springframework.org/springwebflow/docs/1.0.x/reference/index.html Struts can also be integrated with Spring; refer to this sample chapter for the nuts and bolts required for integration.
Hibernate is an open source object-relational mapping tool that provides transparent persistence for POJOs. Object-relational mapping is used to map object-oriented programming objects to relational databases managed by Oracle, DB2, Sybase, and other relational database management systems (RDBMSs). Hibernate is now developed as part of the JBoss family of projects.
Other popular ORM solutions are iBatis, JDO and TopLink. Advantages of Hibernate:
* Retains the natural object model (transparent persistence)
* Minimizes code
* Does not require a container
* The model is not tied to the persistence implementation
* Metadata-controlled persistence
* Transparent: you work with the model, not the data access technology
* Pooling, caching and transactions can all be handled outside of our code
Object/relational mappings are defined in an XML document. The mapping document is designed to be readable and hand-editable. The mapping language is Java-centric, meaning that mappings are constructed around persistent class declarations, not table declarations. Even though many Hibernate users choose to define XML mappings by hand, a number of tools exist to generate the mapping document, including XDoclet, Middlegen and AndroMDA. Before we move to Hibernate mapping, here is a basic JDBC connection configured in Hibernate, using the username uid, the password pwd, the URL jdbc:mysql://localhost/db, the driver com.mysql.jdbc.Driver and the dialect org.hibernate.dialect.MySQLDialect (reconstructed below as a sketch).
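A hibernate.cfg.xml sketch with those connection values; only the five properties discussed next are shown, and the DOCTYPE is the standard Hibernate 3 configuration DTD.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="connection.url">jdbc:mysql://localhost/db</property>
        <property name="connection.username">uid</property>
        <property name="connection.password">pwd</property>
        <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
    </session-factory>
</hibernate-configuration>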
To use Hibernate-provided JDBC connections, the configuration file requires the following five properties:
* connection.driver_class - the JDBC driver class for the specific database
* connection.url - the full JDBC URL to the database
* connection.username - the username used to connect to the database
* connection.password - the password used to authenticate the username
* dialect - the name of the SQL dialect for the database
A typical Hibernate class mapping is sketched below. The mapping definition starts with the hibernate-mapping element. The package attribute sets the default package for unqualified class names in the mapping. With this attribute set, you need only give the class name for other persistent classes listed in the mapping file, such as the Speaker and Attendee classes. To refer to a persistent class outside the given package, you must provide the fully qualified class name within the mapping document. If Hibernate has trouble locating a class because of a missing package on, for instance, a many-to-one element, Hibernate throws a MappingException. This doesn't mean that Hibernate can't find the actual class file, but that it isn't able to navigate from one mapping definition to another. Immediately after the hibernate-mapping tag, you encounter the class tag. The class tag begins the mapping definition for a specific persistent class. The table attribute names the relational table used to store the state of the object. The class element has a number of attributes available, altering how Hibernate persists instances of the class. The id element describes the primary key for the persistent class as well as how the key value is generated. Each persistent class must have an id element declaring the primary key for the relational table. Let's look at the id element:
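A reconstructed sketch of such a mapping for the Event class discussed below. The package com.manning.hq.ch03, the uid column, the long id type and the native generator come from the surrounding text; the table name events, the start_date column and the property types are assumptions for illustration.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping package="com.manning.hq.ch03">
    <class name="Event" table="events">
        <!-- primary key: property "id", column "uid", generated natively -->
        <id name="id" column="uid" type="long" unsaved-value="null">
            <generator class="native"/>
        </id>
        <property name="name" type="string"/>
        <property name="startDate" column="start_date" type="date"/>
        <property name="duration" type="integer"/>
    </class>
</hibernate-mapping>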
The name attribute defines the property in your persistent class that will be used to store the primary key value. The id element implies that the Event class has a property also named id:
public Long getId() { return this.id; }
public void setId(Long id) { this.id = id; }
If the column for the primary key has a different name than your object property, the column attribute is used. For our example's purposes, this column name is uid. The values of the type and unsaved-value attributes depend on the generator used. The generator creates the primary key value for the persistent class. Hibernate provides multiple generator implementations that use various methods to create primary key values. Some implementations increment a value stored in a shared database table, whereas others create hexadecimal strings. Another generator, called assigned, lets you generate and assign the object ID. The assigned generator allows applications to reuse legacy code, such as the UUID generator from an EJB application. A recent introduction is the select generator, which retrieves the primary key value assigned by a database trigger. The generator type you choose determines its behavior based on the underlying database. You've used the native generator class in mapping definitions. Native generators provide portability for mapping documents, since the framework can determine the generator method supported by the database. Generators using the native class will use identity or sequence columns depending on available database support. If neither method is supported, the native generator falls back to a high/low generator method to create unique primary key values. Databases supporting identity columns include Sybase, MySQL, Microsoft SQL Server, and IBM DB2. Oracle, PostgreSQL, and SAP DB support sequence columns. The native generator returns a short, integer, or long value. You've set the type attribute to long, and the id property in the Event object has a type of java.lang.Long. The value of the type attribute and the property type in the object must be the same. The different types of id generators supported by Hibernate are:
1. increment: generates identifiers of type long, short or int that are unique only when no other process is inserting data into the same table. It should not be used in a clustered environment.
2. identity: supports identity columns in DB2, MySQL, MS SQL Server, Sybase and HypersonicSQL. The returned identifier is of type long, short or int.
3. sequence: uses a sequence in DB2, PostgreSQL, Oracle, SAP DB or McKoi, or a generator in InterBase. The returned identifier is of type long, short or int.
4. hilo: uses a hi/lo algorithm to efficiently generate identifiers of type long, short or int, given a table and column (by default hibernate_unique_key and next_hi respectively) as a source of hi values. The hi/lo algorithm generates identifiers that are unique only for a particular database. Do not use this generator with connections enlisted with JTA or with a user-supplied connection.
5. seqhilo: uses a hi/lo algorithm to efficiently generate identifiers of type long, short or int, given a named database sequence.
6. uuid: uses a 128-bit UUID algorithm to generate identifiers of type string, unique within a network (the IP address is used). The UUID is encoded as a string of hexadecimal digits of length 32.
7. guid: uses a database-generated GUID string on MS SQL Server and MySQL.
8. native: picks identity, sequence or hilo depending upon the capabilities of the underlying database.
9. assigned: lets the application assign an identifier to the object before save() is called. This is the default strategy if no generator element is specified.
10. select: retrieves a primary key assigned by a database trigger by selecting the row by some unique key and retrieving the primary key value.
11. foreign: uses the identifier of another associated object. Usually used in conjunction with a primary key association.
The unsaved-value attribute describes the value of the id property for transient instances of this class. The unsaved-value attribute affects how objects are stored. We'll discuss the impact of this attribute later in the article.
Properties
Property elements for the Event object are similar to the id element. Each property element corresponds to a property in the Event object. The name attribute contains the property name, whereas the type attribute specifies the property object type. The column used to store the property value defaults to the property name. The column attribute overrides this default behavior, as shown in the startDate property. If the type attribute is omitted, Hibernate determines the type using runtime reflection. In certain cases, you must provide the property type, since reflection may not be able to determine the desired type (such as differentiating between the Hibernate DATE and TIMESTAMP types). Valid property types include the Hibernate basic types, such as integer, string, and timestamp, as well as the corresponding Java objects and primitives. However, you aren't limited to basic data types. The property element may also contain the name of a serializable Java class or a user-defined type. You create a new user-defined type by implementing either the org.hibernate.UserType or org.hibernate.CompositeUserType interface. The fully qualified class name of the user type or the serializable Java class is used as the property type value. Hibernate supports different types of association and collection mappings (a minimal one-to-many sketch follows the list below). Download the sample source for each of the following mappings here:
1. Simple Association (one to one)
2. Simple Reference (many to one)
3. Proxies
4. Basic Collection (one to many)
5. Collection (many to many)
6. Collection (raw data)
7. Top-level Collections
8. Map
9. Entity Map
10. Subclasses
11. Joined Subclasses
12. Components
13. Collections of Components
14. Composite Id
15. Composite Index
16. Ternary Associations
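As promised above, a minimal one-to-many sketch: an Event holding a set of Speaker instances. The speakers collection name and the event_id foreign-key column are assumptions; the cascade attribute shown here is discussed in the next section.

<class name="Event" table="events">
    <id name="id" column="uid" type="long">
        <generator class="native"/>
    </id>
    <!-- one-to-many: one Event has many Speakers -->
    <set name="speakers" cascade="delete">
        <key column="event_id"/>
        <one-to-many class="Speaker"/>
    </set>
</class>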
Cascades
If you've worked with relational databases, you've no doubt encountered cascades. Cascades propagate certain operations on a table (such as a delete) to associated tables. (Remember that tables are associated through the use of foreign keys.) Suppose that when you delete an Event, you also want to delete each of the Speaker instances associated with the Event. Instead of having the application code perform the deletion, Hibernate can manage it for you. Hibernate supports ten different types of cascades that can be applied to many-to-one associations as well as collections. The default cascade is none. Each cascade strategy specifies the operation or operations that should be propagated to child entities. The cascade types that you are most likely to use are the following:
* all - all operations are passed to child entities: save, update, and delete.
* save-update - save and update (INSERT and UPDATE, respectively) are passed to child entities.
* delete - deletion operations are passed to child entities.
* delete-orphan - all operations are passed to child entities, and objects no longer associated with the parent object are deleted.
The cascade attribute is added to the desired many-to-one or collection element. For example, setting cascade="delete" on the Event's collection of Speakers (as in the sketch above) instructs Hibernate to delete the child Speaker elements when the parent Event is deleted. That's all there is to configuring cascades. It's important to note that Hibernate doesn't pass the cascade off to the database. Instead, the Hibernate service manages the cascades internally. This is necessary because Hibernate has to know exactly which objects are saved, updated, and deleted.
Fetching associated objects
When an object has one or more associated objects, it's important to consider how associated objects will be loaded. Hibernate 3 offers you two options: you can retrieve associated objects either using an outer join or using a separate SELECT statement. The fetch attribute allows you to specify which method to use. With fetch="join" on the many-to-one element, the associated Location instance is loaded with an outer join when an Event instance is loaded; with fetch="select", it is loaded with a separate select. This also applies to child collections, but you can only fetch one collection using a join per persistent object. Additional collections must be fetched using the SELECT strategy. If you're using Hibernate 2, the fetch attribute is not available. Instead, you must use the outer-join attribute for many-to-one associations. (There is no support for retrieving collections using a SELECT in Hibernate 2.) The outer-join attribute takes either a true or false value.
Building the SessionFactory
Hibernate's SessionFactory interface provides instances of the Session class, which represent connections to the database. Instances of SessionFactory are thread-safe and typically shared throughout an application. Session instances, on the other hand, aren't thread-safe and should
only be used for a single transaction or unit of work in an application.
Configuring the SessionFactory
The Configuration class kicks off the runtime portion of Hibernate. It's used to load the mapping files and create a SessionFactory for those mapping files. Once these two functions are complete, the Configuration class can be discarded. Creating a Configuration and SessionFactory instance is simple, but you have some options. There are three ways to create and initialize a Configuration object. This first snippet loads the properties and mapping files defined in the hibernate.cfg.xml file and creates the SessionFactory:
Configuration cfg = new Configuration();
SessionFactory factory = cfg.configure().buildSessionFactory();
The configure() method tells Hibernate to load the hibernate.cfg.xml file. Without that, only hibernate.properties would be loaded from the classpath. The Configuration class can also load mapping documents programmatically:
Configuration cfg = new Configuration();
cfg.addFile("com/manning/hq/ch03/Event.hbm.xml");
Another alternative is to have Hibernate load the mapping document based on the persistent class. This has the advantage of eliminating hard-coded filenames in the source code. For instance, the following code causes Hibernate to look for a file named com/manning/hq/ch03/Event.hbm.xml in the classpath and load the associated class:
Configuration cfg = new Configuration();
cfg.addClass(com.manning.hq.ch03.Event.class);
Since applications can have tens or hundreds of mapping definitions, listing each definition can quickly become cumbersome. To get around this, the hibernate.cfg.xml file supports adding all mapping files in a JAR file. Suppose your build process creates a JAR file named application.jar, which contains all the classes and mapping definitions required. You then reference the JAR from the hibernate.cfg.xml file (a sketch follows below). Of course, you can also do this programmatically with the Configuration class:
cfg.addJar(new java.io.File("application.jar"));
Keep in mind that the JAR file must be in the application classpath. If you're deploying a web application archive (WAR) file, your application JAR file should be in the /WEB-INF/lib directory in the WAR file. The four methods used to specify mapping definitions to the Hibernate runtime can be combined, depending on the requirements for your project. However, once you create the SessionFactory from the Configuration instance, any additional mapping files added to the Configuration instance won't be reflected in the SessionFactory. This means you can't add new persistent classes dynamically. You can use the SessionFactory instance to create Session instances:
Session session = factory.openSession();
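A sketch of the mapping entries inside hibernate.cfg.xml referred to above; the resource entry follows the Event mapping path used in the code, the jar entry references application.jar, and everything else is omitted.

<hibernate-configuration>
    <session-factory>
        <!-- connection and dialect properties omitted; see the earlier sketch -->
        <mapping resource="com/manning/hq/ch03/Event.hbm.xml"/>
        <mapping jar="application.jar"/>
    </session-factory>
</hibernate-configuration>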
Instances of the Session class represent the primary interface to the Hibernate framework. They let you persist objects, query persistent objects, and make persistent objects transient. Let's look at persisting objects with Hibernate.
Persisting objects
Persisting a transient object with Hibernate is as simple as saving it with the Session instance:
Event event = new Event();
//populate the event
Session session = factory.openSession();
session.save(event);
session.flush();
Calling save(...) for the Event instance assigns a generated ID value to the instance and persists the instance. (Keep in mind that Hibernate doesn't set the ID value if the generator type is assigned.) The flush() call forces persistent objects held in memory to be synchronized to the database. Sessions don't immediately write to the database when an object is saved. Instead, the Session queues a number of database writes to maximize performance. If you would like to update an object that is already persistent, the update(...) method is available. Other than the type of SQL operation executed, the difference between save(...) and update(...) is that update(...) doesn't assign an ID value to the object. Because of this minor difference, the Session interface provides the saveOrUpdate(...) method, which determines the correct operation to execute on the object. How does Hibernate know which method to call on an object? When we described the mapping document, we mentioned the unsaved-value attribute. That attribute comes into play when you use the saveOrUpdate(...) method. Suppose you have a newly created Event instance. The id property is null until it's persisted by Hibernate. If the value is null, Hibernate assumes that the object is transient and assigns a new id value before saving the instance. A non-null id value indicates that the object is already persistent; the object is updated in the database, rather than inserted. You could also use a long primitive to store the primary key value. However, using a primitive type also means that you must update the unsaved-value attribute value to 0, since primitive values can't be null. Here's the necessary code to persist an Event instance:
Configuration cfg = new Configuration();
SessionFactory factory = cfg.buildSessionFactory();
Event event = new Event();
//populate the Event instance
Session session = factory.openSession();
session.saveOrUpdate(event);
session.flush();
session.close();
The first two lines create the SessionFactory after loading the configuration file from the classpath. After the Event instance is created and populated, the Session instance, provided by the SessionFactory, persists the Event. The Session is then flushed and closed, which closes the JDBC connection and performs some internal cleanup. That's all there is to persisting objects. Once you've persisted a number of objects, you'll probably want to retrieve them from the database. Retrieving persistent objects is the topic of the next section.
Retrieving objects
Suppose you want to retrieve an Event instance from the database. If you have the Event ID, you can use a Session to return it:
Event event = (Event) session.load(Event.class, eventId);
session.close();
This code tells Hibernate to return the instance of the Event class with an ID equal to eventId. Notice that you're careful to close the Session, returning the database connection to the pool. There is no need to flush the Session, since you're not persisting objects, only retrieving them. What if you don't know the ID of the object you want to retrieve? This is where HQL enters the picture. The Session interface allows you to create Query objects to retrieve persistent objects. (In Hibernate 2, the Session interface supported a number of overloaded find methods. They were deprecated in Hibernate 3.) HQL statements are object-oriented, meaning that you query on object properties instead of database table and column names. Let's look at some examples using the Query interface. This example returns a collection of all Event instances. Notice that you don't need to provide a select ... clause when returning entire objects:
Query query = session.createQuery("from Event");
List events = query.list();
This query is a little more interesting, since we're querying on a property of the Event class:
Query query = session.createQuery("from Event where name = 'Opening Presentation'");
List events = query.list();
We've hardcoded the name value in the query, which isn't optimal. Let's rewrite it:
Query query = session.createQuery("from Event where name = ?");
query.setParameter(0, "Opening Presentation", Hibernate.STRING);
List events = query.list();
The question mark in the query string represents the variable, which is similar to the JDBC PreparedStatement interface. The second method parameter is the value bound to the variable, and the third parameter tells Hibernate the type of the value. (The Hibernate class provides constants for the built-in types, such as STRING, INTEGER, and LONG, so they can be referenced programmatically.) One topic we haven't touched on yet is the cache maintained by the Session. The Session cache tends to cause problems for developers new to Hibernate, so we'll talk about it next.
The Session cache
One easy way to improve performance within the Hibernate service, as well as your applications, is to cache objects. By caching objects in memory, Hibernate avoids the overhead of retrieving them from the database each time. Other than saving overhead when retrieving objects, the Session cache also impacts saving and updating objects. Let's look at a short code listing:
Session session = factory.openSession();
Event e = (Event) session.load(Event.class, myEventId);
e.setName("New Event Name");
session.saveOrUpdate(e);
//later, with the same Session instance
e = (Event) session.load(Event.class, myEventId);
e.setDuration(180);
session.saveOrUpdate(e);
session.flush();
This code first retrieves an Event instance, which the Session caches internally. It then does the following: updates the Event name, saves or updates the Event instance, retrieves the same Event instance (which is stored in the Session cache), updates the duration of the Event, and saves or updates the Event instance. Finally, you flush the Session. All the updates made to the Event instance are combined into a single update when you flush the Session. This is made possible in part by the Session cache. The Session interface supports a simple instance cache for each object that is loaded or saved during the lifetime of a given Session. Each object placed into the cache is keyed on the class type, such as com.manning.hq.ch03.Event, and the primary key value. However, this cache presents some interesting problems for unwary developers. A common problem new developers run into is associating two instances of the same object with the same Session instance, resulting in a NonUniqueObjectException. The following code generates this exception:
Session session = factory.openSession();
Event firstEvent = (Event) session.load(Event.class, myEventId);
//...perform some operation on firstEvent
Event secondEvent = new Event();
secondEvent.setId(myEventId);
session.save(secondEvent);
This code opens the Session instance, loads an Event instance with a given ID, creates a second Event instance with the same ID, and then attempts to save the second Event instance, resulting in the NonUniqueObjectException. Any time an object passes through the Session instance, it's added to the Session's cache. By passes through, we're referring to saving or retrieving the object to and from the database. To see whether an object is contained in the cache, call the Session.contains() method. Objects can be evicted from the cache by calling the Session.evict() method. Let's revisit the previous code, this time evicting the first Event instance:
Session session = factory.openSession();
Event firstEvent = (Event) session.load(Event.class, myEventId);
//...perform some operation on firstEvent
if (session.contains(firstEvent)) {
    session.evict(firstEvent);
}
Event secondEvent = new Event();
secondEvent.setId(myEventId);
session.save(secondEvent);
The code first opens the Session instance and loads an Event instance with a given ID. Next, it determines whether the object is contained in the Session cache and evicts the object if necessary. The code then creates a second Event instance with the same ID and successfully saves the second Event instance.
If you simply want to clear all the objects from the Session cache, you can call the aptly named Session.clear() method.
Connection pools
Connection pools are a common way to improve application performance. Rather than opening a separate connection to the database for each request, the connection pool maintains a collection of open database connections that are reused. Application servers often provide their own connection pools using a JNDI DataSource, which Hibernate can take advantage of when configured to use a DataSource. If you're running a standalone application or your application server doesn't support connection pools, Hibernate supports three connection pooling services: C3P0, Apache's DBCP library, and Proxool. C3P0 is distributed with Hibernate; the other two are available as separate distributions. When you choose a connection pooling service, you must configure it for your environment. Hibernate supports configuring connection pools from the hibernate.cfg.xml file. The connection.provider_class property sets the pooling implementation, for example org.hibernate.connection.C3P0ConnectionProvider. Once the provider class is set, the specific properties for the pooling service, such as the minimum pool size or the connection timeout, can also be configured from the hibernate.cfg.xml file (a sketch follows the table below). The prefix for the C3P0 configuration parameters is c3p0; similarly, the prefixes for DBCP and Proxool are dbcp and proxool, respectively. Specific configuration parameters for each pooling service are available in the documentation provided with each service. The following table lists information for the supported connection pools. Hibernate ships with a basic connection pool suitable for development and testing purposes. However, it should not be used in production. You should always use one of the available connection pooling services, like C3P0, when deploying your application to production. If your preferred connection pool API isn't currently supported by Hibernate, you can add support for it by implementing the org.hibernate.connection.ConnectionProvider interface. Implementing the interface is straightforward.
Connection pooling services
Pooling Service    Provider Class                                        Configuration Prefix
C3P0               org.hibernate.connection.C3P0ConnectionProvider      c3p0
Apache DBCP        org.hibernate.connection.DBCPConnectionProvider      dbcp
Proxool            org.hibernate.connection.ProxoolConnectionProvider   proxool
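The pool settings referred to above, sketched as hibernate.cfg.xml entries; the property names are standard C3P0 settings used with Hibernate, while the values shown are illustrative only.

<hibernate-configuration>
    <session-factory>
        <!-- DOCTYPE and other properties omitted -->
        <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
        <!-- c3p0-prefixed settings; values are only examples -->
        <property name="c3p0.min_size">5</property>
        <property name="c3p0.max_size">20</property>
        <property name="c3p0.timeout">1000</property>
    </session-factory>
</hibernate-configuration>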
There isn't much to using a connection pool, since Hibernate does most of the work behind the scenes. The next configuration topic we'll look at deals with transaction management with the Hibernate Transaction API.
Transactions
Transactions group many operations into a single unit of work. If any operation in the batch fails, all of the previous operations are rolled back, and the unit of work stops. Hibernate can run in many different environments supporting various notions of transactions. Standalone applications and some application servers only support simple JDBC transactions, whereas others support the Java Transaction API (JTA). Hibernate needs a way to abstract the various transaction strategies from the environment. Hibernate has its own Transaction class that is accessible from the Session interface, demonstrated here:
Session session = factory.openSession();
Transaction tx = session.beginTransaction();
Event event = new Event();
//...populate the Event instance
session.saveOrUpdate(event);
tx.commit();
In this example, factory is an initialized SessionFactory instance. This code creates an instance of the org.hibernate.Transaction class and then commits the Transaction instance. Notice that you don't need to call session.flush(). Committing a transaction automatically flushes the Session object. The Event instance is persisted to the database when the transaction is committed. The transaction strategy you use (JDBC or JTA) doesn't matter to the application code; it's set in the Hibernate configuration file. The transaction.factory_class property defines the transaction strategy that Hibernate uses. The default setting is to use JDBC transactions, since they're the most common. To use JTA transactions, you need to set the transaction.factory_class property to org.hibernate.transaction.JTATransactionFactory and the jta.UserTransaction property to java:comp/UserTransaction in hibernate.cfg.xml (a sketch follows). The transaction.factory_class property tells Hibernate that you'll be using JTA transactions. Currently, the only other option to JTA is JDBC transactions, which is the default. JTA transactions are retrieved from a JNDI URI, which is specified using the jta.UserTransaction property. If you don't know the URI for your specific application server, the default value is java:comp/UserTransaction. There is some confusion about another property related to JTA transactions: transaction.manager_lookup_class. You only need to specify the manager lookup class when you're using a transactional cache. (We discuss caches in the next section, don't worry.) However, if you don't define the jta.UserTransaction property and transaction.manager_lookup_class is defined, the user transaction name in the lookup factory class is used. If neither of the properties is used, Hibernate falls back to java:comp/UserTransaction.
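The two JTA-related properties mentioned above, sketched as hibernate.cfg.xml entries; only these entries are shown.

<hibernate-configuration>
    <session-factory>
        <!-- DOCTYPE and other properties omitted -->
        <property name="transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</property>
        <property name="jta.UserTransaction">java:comp/UserTransaction</property>
    </session-factory>
</hibernate-configuration>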
What's the benefit of using JTA transactions? JTA transactions are useful if you have multiple transactional resources, such as a database and a message queue. JTA allows you to treat the disparate transactions as a single transaction. Combining multiple transactions also applies within Hibernate. If you attempt to create multiple transactions from the same Session instance, all of the operations are batched into the first transaction. Let's look at an example that includes two transactions:
Transaction tx0 = session.beginTransaction();
Event event = new Event();
//...populate the event instance
session.saveOrUpdate(event);
Transaction tx1 = session.beginTransaction();
Location location = new Location();
//...populate the Location instance
session.saveOrUpdate(location);
tx0.commit();
tx1.commit();
This example begins by creating a new transaction. The second use of session.beginTransaction() just returns the first transaction instance. tx0.commit() commits the first transaction, and tx1.commit() recommits the first transaction. Although you explicitly create two Transaction objects, only one is used. Of course, this creates a problem. Let's assume you have a Session object being used by two application threads. The first application thread begins the JTA transaction and starts adding objects. Meanwhile, the second thread, using the same transaction, deletes an object and commits the transaction. Where does this leave the first thread? The first thread won't be committed, which is what you'd expect. The problem is that this issue can be hard to debug, bringing up an important point: Sessions should be used by only one application thread at a time. This is a common concern in web applications, which are multithreaded by their very nature.
Cache providers
As we mentioned earlier, caching is a common method used to improve application performance. Caching can be as simple as having a class store frequently used data, or a cache can be distributed among multiple computers. The logic used by caches can also vary widely, but most use a simple least recently used (LRU) algorithm to determine which objects should be removed from the cache after a configurable amount of time. Before you get confused, let's clarify the difference between the Session-level cache, also called the first-level cache, and what this section covers. The Session-level cache stores object instances for the lifetime of a given Session instance. The caching services described in this section cache data outside of the lifetime of a given Session. Another way to think about the difference is that the Session cache is like a transactional cache that only caches the data needed for a given operation or set of operations, whereas a second-level cache is an application-wide cache. By default, Hibernate supports four different caching services. EHCache (Easy Hibernate Cache) is the default service. If you prefer to use an alternative cache, you need to set the cache.provider_class property in the hibernate.cfg.xml file:
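A reconstructed sketch of that setting; only the single property is shown.

<hibernate-configuration>
    <session-factory>
        <!-- DOCTYPE and other properties omitted -->
        <property name="cache.provider_class">org.hibernate.cache.OSCacheProvider</property>
    </session-factory>
</hibernate-configuration>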
This snippet sets the cache provider to the OSCache caching service.
Caching Services Supported by Hibernate
Caching Service   Provider Class                            Type
EHCache           org.hibernate.cache.EhCacheProvider       Memory, disk
OSCache           org.hibernate.cache.OSCacheProvider       Memory, disk
SwarmCache        org.hibernate.cache.SwarmCacheProvider    Clustered
TreeCache         org.hibernate.cache.TreeCacheProvider     Clustered
The caching services support the caching of classes as well as collections belonging to persistent classes. For instance, suppose you have a large number of Attendee instances associated with a particular Event instance. Instead of repeatedly fetching the collection of Attendees, you can cache it. Caching for classes and collections is configured in the mapping files with the cache element, which can appear inside a class element as well as inside a collection element such as a set (a sketch follows the strategy table below). Once you've chosen a caching service, what do you, the developer, need to do differently to take advantage of cached objects? Thankfully, you don't have to do anything. Hibernate works with the cache behind the scenes, so concerns about retrieving an outdated object from the cache can be avoided. You only need to select the correct value for the usage attribute. The usage attribute specifies the caching concurrency strategy used by the underlying caching service. Setting the usage to read-write is desirable if your application needs to update data. Alternatively, you may use the nonstrict-read-write strategy if it's unlikely that two separate transaction threads could update the same object. If a persistent object is never updated, only read from the database, you may set usage to read-only. Some caching services, such as the JBoss TreeCache, use transactions to batch multiple operations and perform the batch as a single unit of work. If you choose to use a transactional cache, you may set the usage attribute to transactional to take advantage of this feature. If you happen to be using a transactional cache, you'll also need to set the transaction.manager_lookup_class mentioned in the previous section. The supported caching strategies differ based on the service used.
Supported Caching Service Strategies
Strategy               EHCache   OSCache   SwarmCache   TreeCache
Read-only              Y         Y         Y            Y
Read-write             Y         Y         Y            N
Nonstrict-read-write   Y         Y         Y            N
Transactional          N         N         N            Y
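The cache element mentioned before the strategy table is added to the class or collection mapping. A minimal sketch for the Event class and an attendees collection follows; the collection and column names are assumptions.

<class name="Event" table="events">
    <!-- second-level caching for the class itself -->
    <cache usage="read-write"/>
    <!-- id and property mappings omitted -->
    <set name="attendees">
        <!-- caching for the collection of Attendee references -->
        <cache usage="read-write"/>
        <key column="event_id"/>
        <one-to-many class="Attendee"/>
    </set>
</class>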
Clearly, the caching service you choose will depend on your application requirements and environment. Next, let's look at configuring EHCache.
Configuring EHCache
By now you're probably tired of reading about configuring Hibernate, but EHCache is pretty simple: it's a single XML file, placed in a directory listed in your classpath. You'll probably want to put the ehcache.xml file in the same directory as the hibernate.cfg.xml file. In the ehcache.xml file, the diskStore property sets the location of the disk cache store, and the file then declares the caches. The defaultCache element contains the settings for all cached objects that don't have a specific cache element: the number of cached objects held in memory, whether objects in the cache expire (if eternal is true, then objects don't expire), the number of seconds an object should remain in the cache after it was last accessed, the number of seconds an object should remain in the cache after it was created, and whether objects exceeding maxElementsInMemory should be spooled to the diskStore. Next, for custom settings based on the class, the file defines a cache element with the fully qualified class name listed in the name attribute. (This only demonstrates a subset of the available configuration for EHCache. Please refer to the documentation found at http://ehcache.sf.net for more information.)
Inheritance
Inheritance is a fundamental concept of object-oriented languages. Through inheritance, objects can inherit the state and behavior of their ancestor, or superclass. The most common use of object inheritance in applications is to create a generic base type with one or more specialized subclasses. Persisting a class hierarchy can be difficult, since each hierarchy can have its own unique requirements. To address the problems found in hierarchy persistence, Hibernate supports three different inheritance persistence strategies:
* Table per class hierarchy
* Table per subclass
* Table per concrete class
Each mapping strategy is incrementally more complicated. In the following sections, we'll discuss the first two inheritance strategies. We've never needed to use the third, and most complicated, strategy.
Table per class hierarchy
This strategy is the most basic and easiest to use. All the classes in the hierarchy are stored in a single table. Suppose you have the base Event class, with ConferenceEvent and NetworkingEvent as subclasses. A mapping sketch covering this strategy and the table-per-subclass strategy appears at the end of this inheritance discussion. The most important new feature in such a mapping definition is the inclusion of the discriminator element. The discriminator column is what Hibernate uses to tell the different subclasses apart when retrieving classes from the database. If you don't specify a discriminator value, Hibernate uses the object's class name. The discriminator element in the example mapping tells Hibernate to look in the event_type column for a string describing the class type. The discriminator is only a column in the relational table; you don't need to define it as a property in your Java object. The subclass element contains the properties and associations belonging to the subclass. Any association element is allowed between subclass tags. You can't have an id element or a nested subclass element. The table per class hierarchy strategy requires a single table, events, to store the three types of Event instances. As you can see, one table contains the fields for all the objects in the hierarchy. The only obvious limitation is that your subclasses can't have columns declared as NOT NULL. Subclasses can't have non-null attributes because inserting the superclass, which doesn't even have the attribute in question, would cause a null column violation when it's inserted into the database. The next inheritance strategy, table per subclass, doesn't have this limitation.
Table per subclass
Instead of putting all the classes into a single table, you can choose to put each subclass into its own table. This approach eliminates the discriminator column and introduces a one-to-one
mapping from the subclass tables to the superclass table. In the table-per-subclass mapping, the joined-subclass element can contain the same elements as the subclass element, and its key element contains the primary key association to the superclass, Event.
Table per concrete class
Since this association can refer to any class in the Event hierarchy, the association is referred to as a polymorphic association. You can also create a concrete association by giving the name of the specific subclass as the class of the association.
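A reconstructed sketch of the two inheritance mappings discussed above, using the Event, ConferenceEvent and NetworkingEvent classes and the event_type discriminator column from the text; the discriminator values, subclass properties, and table and column names of the subclasses are assumptions.

<!-- Table per class hierarchy: every class stored in the events table -->
<class name="Event" table="events" discriminator-value="EVENT">
    <id name="id" column="uid" type="long">
        <generator class="native"/>
    </id>
    <discriminator column="event_type" type="string"/>
    <property name="name" type="string"/>
    <subclass name="ConferenceEvent" discriminator-value="CONF">
        <property name="numberOfSeats" column="number_of_seats"/>
    </subclass>
    <subclass name="NetworkingEvent" discriminator-value="NET">
        <property name="foodProvided" column="food_provided"/>
    </subclass>
</class>

<!-- Table per subclass: each subclass gets its own table, joined by primary key -->
<class name="Event" table="events">
    <id name="id" column="uid" type="long">
        <generator class="native"/>
    </id>
    <property name="name" type="string"/>
    <joined-subclass name="ConferenceEvent" table="conference_events">
        <key column="event_id"/>
        <property name="numberOfSeats" column="number_of_seats"/>
    </joined-subclass>
</class>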
Q. Explain the DI or IOC pattern.
A: Dependency injection (DI) is a programming design pattern and architectural model, sometimes also referred to as inversion of control (IOC), although technically speaking dependency injection refers to an implementation of one particular form of IOC. Dependency injection describes the situation where one object relies on a second object to provide a particular capability: for example, being passed a database connection as an argument to the constructor instead of creating one internally. The term "dependency injection" is something of a misnomer, since it is not a dependency that is injected but rather a provider of some capability or resource. There are three common forms of dependency injection: setter-, constructor- and interface-based injection. Dependency injection is a way to achieve loose coupling. Inversion of control (IOC) relates to the way in which an object obtains references to its dependencies, which is often done through a lookup method. The advantage of inversion of control is that it decouples objects from specific lookup mechanisms and from the implementations of the objects they depend on. As a result, more flexibility is gained both for production applications and for testing.
Q. What are the different IOC containers available?
A. Spring is an IOC container. Other IOC containers are HiveMind, Avalon and PicoContainer.
Q. What are the different types of dependency injection? Explain with examples.
A: There are two types of dependency injection commonly used with Spring: setter injection and constructor injection.
Setter injection: Normally in Java beans we use setter and getter methods to set and get the value of a property, as follows:
public class namebean {
    String name;
    public void setName(String a) { name = a; }
    public String getName() { return name; }
}
We would create an instance of the bean 'namebean' (say bean1) and set the property as bean1.setName("tom"); With setter injection, we instead set the property 'name' in the Spring configuration file (a sketch follows below). The property subelement sets the 'name' property by calling the set method as setName("tom"); this process is called setter injection. To set properties that reference other beans, the ref subelement of property is used.
Constructor injection: For constructor injection, we use a constructor with parameters, as shown below:
public class namebean {
    String name;
    public namebean(String a) { name = a; }
}
In plain Java we would set the property 'name' while creating an instance of the bean 'namebean' as namebean bean1 = new namebean("tom"); With Spring, we use the constructor-arg element to set the property by constructor injection (see the sketch below).
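A sketch of the two styles in Spring bean-definition XML, reconstructed from the values mentioned above (tom, My Bean Value); the bean id values are only illustrative.

<beans>
    <!-- setter injection: Spring calls setName("tom") after instantiation -->
    <bean id="setterBean" class="namebean">
        <property name="name"><value>tom</value></property>
    </bean>

    <!-- constructor injection: Spring calls new namebean("My Bean Value") -->
    <bean id="constructorBean" class="namebean">
        <constructor-arg><value>My Bean Value</value></constructor-arg>
    </bean>
</beans>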
Q. What is spring? What are the various parts of spring framework? What are the different persistence frameworks which could be used with spring?
A. Spring is an open source framework created to address the complexity of enterprise application development. One of the chief advantages of the Spring framework is its layered architecture, which allows you to be selective about which of its components you use while also providing a cohesive framework for J2EE application development. The Spring modules are built on top of the core container, which defines how beans are created, configured, and managed. Each of the modules (or components) that comprise the Spring framework can stand on its own or be implemented jointly with one or more of the others. The functionality of each component is as follows:
* The core container: The core container provides the essential functionality of the Spring framework. A primary component of the core container is the BeanFactory, an implementation of the Factory pattern. The BeanFactory applies the Inversion of Control (IOC) pattern to separate an application's configuration and dependency specification from the actual application code.
* Spring context: The Spring context is a configuration file that provides context information to the Spring framework. The Spring context includes enterprise services such as JNDI, EJB, e-mail, internationalization, validation, and scheduling functionality.
* Spring AOP: The Spring AOP module integrates aspect-oriented programming functionality directly into the Spring framework through its configuration management feature. As a result, you can easily AOP-enable any object managed by the Spring framework. The Spring AOP module provides transaction management services for objects in any Spring-based application. With Spring AOP you can incorporate declarative transaction management into your applications without relying on EJB components.
* Spring DAO: The Spring JDBC DAO abstraction layer offers a meaningful exception hierarchy for managing the exception handling and error messages thrown by different database vendors. The exception hierarchy simplifies error handling and greatly reduces the amount of exception code you need to write, such as opening and closing connections. Spring DAO's JDBC-oriented exceptions comply with its generic DAO exception hierarchy.
* Spring ORM: The Spring framework plugs into several ORM frameworks to provide its object-relational tooling, including JDO, Hibernate, and iBATIS SQL Maps. All of these comply with Spring's generic transaction and DAO exception hierarchies.
* Spring Web module: The Web context module builds on top of the application context module, providing contexts for Web-based applications. As a result, the Spring framework supports integration with Jakarta Struts. The Web module also eases the tasks of handling multipart requests and binding request parameters to domain objects.
* Spring MVC framework: The Model-View-Controller (MVC) framework is a full-featured MVC implementation for building Web applications. The MVC framework is highly configurable via strategy interfaces and accommodates numerous view technologies including JSP, Velocity, Tiles, iText, and POI.

Q. What is AOP? How does it relate to IOC? What are the different tools to utilize AOP?
A: Aspect-oriented programming, or AOP, is a programming technique that allows programmers to modularize crosscutting concerns, or behavior that cuts across the typical divisions of responsibility, such as logging and transaction management. The core construct of AOP is the aspect, which encapsulates behaviors affecting multiple classes into reusable modules. AOP and IOC are complementary technologies in that both apply a modular approach to complex problems in enterprise application development. In a typical object-oriented development approach you might implement logging functionality by putting logger statements in all your methods and Java classes. In an AOP approach you would instead modularize the logging services and apply them declaratively to the components that required logging. The advantage, of course, is that the Java class doesn't need to know about the existence of the logging service or concern itself with any related code. As a result, application code written using Spring AOP is
loosely coupled. The best tool for exploiting AOP to its full capability is AspectJ. However, AspectJ works at the byte code level and you need to use the AspectJ compiler to get the AOP features built into your compiled code. Nevertheless, AOP functionality is fully integrated into the Spring context for transaction management, logging, and various other features. In general, any AOP framework controls aspects through three concepts:
* Joinpoints: points in a program's execution. For example, joinpoints could define calls to specific methods in a class.
* Pointcuts: program constructs used to designate joinpoints and collect specific context at those points.
* Advices: code that runs upon meeting certain conditions. For example, an advice could log a message before executing a joinpoint.

Q. What are the advantages of spring framework?
A.
1. Spring has a layered architecture: use what you need and leave what you don't need for now.
2. Spring enables POJO programming. There is no behind-the-scenes magic here, and POJO programming enables continuous integration and testability.
3. Dependency injection and inversion of control simplify JDBC (see the first question).
4. Open source and no vendor lock-in.

Q. Can you name a tool which could provide the initial ant files and directory structure for a new spring project?
A: AppFuse or Equinox.

Q. Explain BeanFactory in spring.
A: A bean factory is an implementation of the factory design pattern and its function is to create and dispense beans. Because the bean factory knows about many objects within an application, it is able to create associations between collaborating objects as they are instantiated. This removes the burden of configuration from the bean and the client. There are several implementations of BeanFactory. The most useful one is org.springframework.beans.factory.xml.XmlBeanFactory. It loads its beans based on the definitions contained in an XML file. To create an XmlBeanFactory, pass an InputStream to the constructor; the resource will provide the XML to the factory.

BeanFactory factory = new XmlBeanFactory(new FileInputStream("myBean.xml"));

This line tells the bean factory to read the bean definitions from the XML file. A bean definition includes the description of a bean and its properties. The bean factory doesn't instantiate the bean yet, though. To retrieve a bean from a BeanFactory, the getBean() method is called. When getBean() is called, the factory will instantiate the bean and begin setting the bean's properties using dependency injection.

myBean bean1 = (myBean) factory.getBean("myBean");

Q. Explain the role of ApplicationContext in spring.
A. While BeanFactory is used for simple applications, the ApplicationContext is Spring's more advanced container. Like BeanFactory, it can be used to load bean definitions, wire beans together and dispense beans upon request. It also provides 1) a means for resolving text messages, including support for internationalization, 2) a generic way to load file resources, and 3) events to beans that are registered as listeners. Because of this additional functionality, ApplicationContext is preferred over BeanFactory; a BeanFactory is used only when resources are scarce, as on mobile devices. The three commonly used implementations of ApplicationContext are:
1. ClassPathXmlApplicationContext: loads the context definition from an XML file located in the classpath, treating context definitions as classpath resources. The application context is loaded from the application's classpath by using the code:

ApplicationContext context = new ClassPathXmlApplicationContext("bean.xml");

2. FileSystemXmlApplicationContext: loads the context definition from an XML file in the filesystem. The application context is loaded from the file system by using the code:

ApplicationContext context = new FileSystemXmlApplicationContext("bean.xml");

3. XmlWebApplicationContext: loads the context definition from an XML file contained within a web application.

Q. How does Spring support DAO in hibernate?
A. Spring's HibernateDaoSupport class is a convenient superclass for Hibernate DAOs. It has handy methods you can call to get a Hibernate Session or a SessionFactory. The most convenient method is getHibernateTemplate(), which returns a HibernateTemplate. This template wraps Hibernate checked exceptions with runtime exceptions, allowing your DAO interfaces to be Hibernate exception-free. Example:

public class UserDAOHibernate extends HibernateDaoSupport {

    public User getUser(Long id) {
        return (User) getHibernateTemplate().get(User.class, id);
    }

    public void saveUser(User user) {
        getHibernateTemplate().saveOrUpdate(user);
        if (log.isDebugEnabled()) {
            log.debug("userId set to: " + user.getID());
        }
    }

    public void removeUser(Long id) {
        Object user = getHibernateTemplate().load(User.class, id);
        getHibernateTemplate().delete(user);
    }
}
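The DAO above still needs a Hibernate SessionFactory injected into it. A minimal sketch of the Spring wiring, assuming a session factory bean named 'sessionFactory' defined elsewhere in the context (for example a LocalSessionFactoryBean, as sketched a few answers below); the bean names here are illustrative:

<bean id="userDAO" class="UserDAOHibernate">
    <!-- HibernateDaoSupport exposes setSessionFactory(), so Spring can inject it here -->
    <property name="sessionFactory">
        <ref bean="sessionFactory"/>
    </property>
</bean>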
Q. What are the id generator classes in hibernate?
A:
increment: generates identifiers of type long, short or int that are unique only when no other process is inserting data into the same table. It should not be used in a clustered environment.
identity: supports identity columns in DB2, MySQL, MS SQL Server, Sybase and HypersonicSQL. The returned identifier is of type long, short or int.
sequence: uses a sequence in DB2, PostgreSQL, Oracle, SAP DB, McKoi or a generator in InterBase. The returned identifier is of type long, short or int.
hilo: uses a hi/lo algorithm to efficiently generate identifiers of type long, short or int, given a table and column (by default hibernate_unique_key and next_hi respectively) as a source of hi values. The hi/lo algorithm generates identifiers that are unique only for a particular database. Do not use this generator with connections enlisted with JTA or with a user-supplied connection.
seqhilo: uses a hi/lo algorithm to efficiently generate identifiers of type long, short or int, given a named database sequence.
uuid: uses a 128-bit UUID algorithm to generate identifiers of type string, unique within a network (the IP address is used). The UUID is encoded as a string of 32 hexadecimal digits.
guid: uses a database-generated GUID string on MS SQL Server and MySQL.
native: picks identity, sequence or hilo depending upon the capabilities of the underlying database.
assigned: lets the application assign an identifier to the object before save() is called. This is the default strategy if no generator element is specified.
select: retrieves a primary key assigned by a database trigger by selecting the row by some unique key and retrieving the primary key value.
foreign: uses the identifier of another associated object. Usually used in conjunction with a primary key association.

Q. What does a typical Spring implementation look like?
A. A typical Spring application needs the following files:
1. An interface that defines the functions.
2. An implementation that contains properties, their setter and getter methods, the functions, etc.
3. An XML file called the Spring configuration file.
4. A client program that uses the functions.
A minimal sketch of these pieces is shown below.
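The following sketch uses hypothetical names (GreetingService, GreetingServiceImpl, beans.xml) that are not taken from the original text; the client mirrors the XmlBeanFactory usage shown in the BeanFactory answer above.

// 1. An interface that defines the functions
public interface GreetingService {
    String sayGreeting();
}

// 2. An implementation with a property, its setter and getter, and the function
public class GreetingServiceImpl implements GreetingService {
    private String greeting;
    public void setGreeting(String greeting) { this.greeting = greeting; }
    public String getGreeting() { return greeting; }
    public String sayGreeting() { return greeting; }
}

<!-- 3. The Spring configuration file, e.g. beans.xml -->
<beans>
    <bean id="greetingService" class="GreetingServiceImpl">
        <property name="greeting"><value>Hello world</value></property>
    </bean>
</beans>

// 4. A client program that uses the function
import java.io.FileInputStream;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.xml.XmlBeanFactory;

public class Client {
    public static void main(String[] args) throws Exception {
        BeanFactory factory = new XmlBeanFactory(new FileInputStream("beans.xml"));
        GreetingService service = (GreetingService) factory.getBean("greetingService");
        System.out.println(service.sayGreeting());
    }
}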
Q. How do you define hibernate mapping file in spring?
A. Add the Hibernate mapping file as a mapping resource entry on the session factory bean inside Spring's applicationContext.xml file in the web/WEB-INF directory, for example org/appfuse/model/User.hbm.xml, as sketched below.
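A minimal sketch of such an entry, assuming Hibernate 3, an HSQL database and a data source bean named 'dataSource' (the bean names and dialect are assumptions):

<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource"><ref bean="dataSource"/></property>
    <property name="mappingResources">
        <list>
            <value>org/appfuse/model/User.hbm.xml</value>
        </list>
    </property>
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
        </props>
    </property>
</bean>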
Q. How do you configure spring in a web application?
A. It is very easy to configure any J2EE-based web application to use Spring. At the very least, you can simply add Spring's ContextLoaderListener (org.springframework.web.context.ContextLoaderListener) to your web.xml file, as sketched below.
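A minimal web.xml fragment for this, assuming the default configuration location of WEB-INF/applicationContext.xml:

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>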
Q. Can you have xyz.xml file instead of applicationcontext.xml?
A. Yes. ContextLoaderListener is a ServletContextListener that initializes when your webapp starts up. By default, it looks for Spring's configuration file at WEB-INF/applicationContext.xml. You can change this default value by specifying a <context-param> element named contextConfigLocation and pointing it at, for example, /WEB-INF/xyz.xml, as sketched below.
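A hedged web.xml fragment illustrating this; the listener element stays the same as in the previous answer:

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/xyz.xml</param-value>
</context-param>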
Q. How do you configure your database driver in spring?
A. Using the datasource class org.springframework.jdbc.datasource.DriverManagerDataSource, configured with the JDBC driver class, URL and credentials (for example org.hsqldb.jdbcDriver, jdbc:hsqldb:db/appfuse and user sa), as sketched below.
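A minimal sketch of such a bean definition, using the values mentioned in the answer (the empty password is an assumption):

<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName"><value>org.hsqldb.jdbcDriver</value></property>
    <property name="url"><value>jdbc:hsqldb:db/appfuse</value></property>
    <property name="username"><value>sa</value></property>
    <property name="password"><value></value></property>
</bean>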
Q. How can you configure JNDI instead of datasource in spring applicationcontext.xml?
A. Using org.springframework.jndi.JndiObjectFactoryBean, pointing its jndiName property at the container-managed DataSource (for example java:comp/env/jdbc/appfuse), as sketched below.
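A minimal sketch, assuming the JNDI name given in the answer:

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName">
        <value>java:comp/env/jdbc/appfuse</value>
    </property>
</bean>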
Q. What are the key benefits of Hibernate?
A: These are the key benefits of Hibernate:
* Transparent persistence based on POJOs, without byte code processing
* Powerful object-oriented Hibernate Query Language (HQL)
* Descriptive O/R mapping through mapping files
* Automatic primary key generation
* Hibernate caching: session-level, query and second-level caches
* Performance: lazy initialization, outer join fetching, batch fetching

Q. What is hibernate session and session factory? How do you configure sessionfactory in spring configuration file?
A. Hibernate Session is the main runtime interface between a Java application and Hibernate. A SessionFactory allows applications to create Hibernate sessions and is built by reading the Hibernate configuration file hibernate.cfg.xml:

// Initialize the Hibernate environment
Configuration cfg = new Configuration().configure();
// Create the session factory
SessionFactory factory = cfg.buildSessionFactory();
// Obtain the new session object
Session session = factory.openSession();

The call to Configuration().configure() loads the hibernate.cfg.xml configuration file and initializes the Hibernate environment. Once the configuration is initialized, you can make any additional modifications you desire programmatically. However, you must make these modifications prior to creating the SessionFactory instance. An instance of SessionFactory is typically created once and used to create all sessions related to a given context. The main function of the Session is to offer create, read and delete operations for instances of mapped entity classes. Instances may exist in one of three states:
* transient: never persistent, not associated with any Session
* persistent: associated with a unique Session
* detached: previously persistent, not associated with any Session
A Hibernate Session object represents a single unit of work for a given data store and is opened by a SessionFactory instance. You must close Sessions when all work for a transaction is completed. The following illustrates a typical Hibernate session:

Session session = null;
UserInfo user = null;
Transaction tx = null;
try {
    session = factory.openSession();
    tx = session.beginTransaction();
    user = (UserInfo) session.load(UserInfo.class, id);
    tx.commit();
} catch (Exception e) {
    if (tx != null) {
        try {
            tx.rollback();
        } catch (HibernateException e1) {
            throw new DAOException(e1.toString());
        }
    }
    throw new DAOException(e.toString());
} finally {
    if (session != null) {
        try {
            session.close();
        } catch (HibernateException e) {
        }
    }
}

Q. What is the difference between hibernate get and load methods?
A. The load() method is older; get() was added to Hibernate's API due to user request. The difference is trivial. The following Hibernate code snippet retrieves a User object from the database:

User user = (User) session.get(User.class, userID);

The get() method is special because the identifier uniquely identifies a single instance of a class, so it's common for applications to use the identifier as a convenient handle to a persistent object. Retrieval by identifier can use the cache when retrieving an object, avoiding a database hit if the object is already cached. Hibernate also provides a load() method:

User user = (User) session.load(User.class, userID);

If load() can't find the object in the cache or database, an exception is thrown. The load() method never returns null. The get() method returns null if the object can't be found. The load() method may return a proxy instead of a real persistent instance. A proxy is a placeholder instance of a runtime-generated subclass (through cglib or Javassist) of a mapped persistent class; it can initialize itself if any method other than the mapped database identifier getter method is called. On the other hand, get() never returns a proxy.
Choosing between get() and load() is easy: if you're certain the persistent object exists, and nonexistence would be considered exceptional, load() is a good option. If you aren't certain there is a persistent instance with the given identifier, use get() and test the return value to see if it's null. Using load() has a further implication: the application may retrieve a valid reference (a proxy) to a persistent instance without hitting the database to retrieve its persistent state. So load() might not throw an exception when it doesn't find the persistent object in the cache or database; the exception would be thrown later, when the proxy is accessed.

Q. What type of transaction management is supported in hibernate?
A. Hibernate communicates with the database via a JDBC Connection; hence it must support both managed and non-managed transactions: non-managed transactions in web containers, and managed transactions in an application server using JTA. The corresponding configuration is sketched below.
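A hedged sketch of the hibernate.cfg.xml properties for each case; the JBoss lookup class is only an example, since the transaction manager lookup class varies by application server:

<!-- non-managed JDBC transactions, e.g. in a plain web container -->
<property name="hibernate.transaction.factory_class">
    org.hibernate.transaction.JDBCTransactionFactory
</property>

<!-- managed transactions using JTA in an application server -->
<property name="hibernate.transaction.factory_class">
    org.hibernate.transaction.JTATransactionFactory
</property>
<property name="hibernate.transaction.manager_lookup_class">
    org.hibernate.transaction.JBossTransactionManagerLookup
</property>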
Q. What is lazy loading and how do you achieve that in hibernate?
A. The lazy setting decides whether to load child objects while loading the parent object. You specify lazy="true" on the parent class's association or collection mapping in the Hibernate mapping file; by default, lazy loading of the child objects is true. This makes sure that the child objects are not loaded unless they are explicitly invoked in the application by calling getChild() on the parent. In that case Hibernate issues a fresh database call to load the children when getChild() is actually called on the parent object. But in some cases you do need to load the child objects when the parent is loaded. Just set lazy="false" and Hibernate will load the children when the parent is loaded from the database. Examples: the Address child of a User class can be made lazy if it is not required frequently, but you may need to load the Author object for a Book parent whenever you deal with the book in an online bookshop. Hibernate does not support lazy initialization for detached objects. Access to a lazy association outside of the context of an open Hibernate session will result in an exception.

Q. What are the different fetching strategies in Hibernate?
A. Hibernate3 defines the following fetching strategies:
* Join fetching - Hibernate retrieves the associated instance or collection in the same SELECT, using an OUTER JOIN.
* Select fetching - a second SELECT is used to retrieve the associated entity or collection. Unless you explicitly disable lazy fetching by specifying lazy="false", this second select will only be executed when you actually access the association.
* Subselect fetching - a second SELECT is used to retrieve the associated collections for all
entities retrieved in a previous query or fetch. Unless you explicitly disable lazy fetching by specifying lazy="false", this second select will only be executed when you actually access the association.
* Batch fetching - an optimization strategy for select fetching: Hibernate retrieves a batch of entity instances or collections in a single SELECT, by specifying a list of primary keys or foreign keys.
For more details, read the short primer on fetching strategies at http://www.hibernate.org/315.html

Q. What are the different types of cache hibernate supports?
A. Caching is widely used for optimizing database applications. Hibernate uses two different caches for objects: a first-level cache and a second-level cache. The first-level cache is associated with the Session object, while the second-level cache is associated with the SessionFactory object. By default, Hibernate uses the first-level cache on a per-transaction basis. Hibernate uses this cache mainly to reduce the number of SQL queries it needs to generate within a given transaction. For example, if an object is modified several times within the same transaction, Hibernate will generate only one SQL UPDATE statement at the end of the transaction, containing all the modifications. To reduce database traffic, the second-level cache keeps loaded objects at the SessionFactory level between transactions. These objects are available to the whole application, not just to the user running the query. This way, each time a query returns an object that is already loaded in the cache, one or more database transactions are potentially avoided. In addition, you can use a query-level cache if you need to cache actual query results, rather than just persistent objects. The query cache should always be used in conjunction with the second-level cache. Hibernate supports the following cache implementations out of the box:
* EHCache is a fast, lightweight, and easy-to-use in-process cache. It supports read-only and read/write caching, and memory- and disk-based caching. However, it does not support clustering.
* OSCache is another open-source caching solution. It is part of a larger package, which also provides caching functionality for JSP pages or arbitrary objects. It is a powerful and flexible package which, like EHCache, supports read-only and read/write caching, and memory- and disk-based caching. It also provides basic support for clustering via either JavaGroups or JMS.
* SwarmCache is a simple cluster-based caching solution based on JavaGroups. It supports read-only or nonstrict read/write caching (the next answer explains this term). This type of cache is appropriate for applications that typically have many more read operations than write operations.
* JBoss TreeCache is a powerful replicated (synchronous or asynchronous) and transactional cache. Use this solution if you really need a true transaction-capable caching architecture.
* The commercial Tangosol Coherence cache.

Q. What are the different caching strategies?
A. The following four caching strategies are available:
* Read-only: This strategy is useful for data that is read frequently but never updated. This is by far the simplest and best-performing cache strategy.
* Read/write: Read/write caches may be appropriate if your data needs to be updated. They carry more overhead than read-only caches. In non-JTA environments, each transaction should be completed when Session.close() or Session.disconnect() is called.
* Nonstrict read/write: This strategy does not guarantee that two transactions won't simultaneously modify the same data. Therefore, it may be most appropriate for data that is read often but only occasionally modified.
* Transactional: This is a fully transactional cache that may be used only in a JTA environment.

Q. How do you configure the second-level cache in hibernate?
A. To activate second-level caching, you need to define the hibernate.cache.provider_class property in the hibernate.cfg.xml file, for example:

<property name="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</property>

By default, the second-level cache is activated and uses the EHCache provider. To use the query cache you must first enable it by setting the property hibernate.cache.use_query_cache to true in hibernate.properties.

Q. What is the difference between sorted and ordered collection in hibernate?
A. A sorted collection is sorted in memory using a Java comparator, while an ordered collection is ordered at the database level using an order by clause.

Q. What are the types of inheritance models and describe how they work, like vertical and horizontal inheritance?
A. There are three types of inheritance mapping in hibernate. Example: take the simple case of three Java classes, where Manager and Worker inherit from the abstract class Employee.
1. Table per concrete class with unions: in this case there will be 2 tables, Manager and Worker (all common attributes are duplicated in each).
2. Table per class hierarchy: a single table can be mapped to a class hierarchy. There will be only one table in the database, 'Employee', that holds all the attributes required for all 3 classes, plus a discriminating column to differentiate between Manager and Worker.
3. Table per subclass: in this case there will be 3 tables representing Employee, Manager and Worker.
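As an illustration of the first strategy, here is a minimal sketch of a table-per-concrete-class mapping using union-subclass elements; the property names (name, department, shift) are assumptions, and note that this strategy needs an identifier generator that is valid across tables, such as hilo or a shared sequence, rather than identity:

<class name="Employee" abstract="true">
    <id name="id" column="employee_id">
        <generator class="hilo"/>
    </id>
    <!-- common attributes declared here are duplicated in each concrete table -->
    <property name="name"/>
    <union-subclass name="Manager" table="manager">
        <property name="department"/>
    </union-subclass>
    <union-subclass name="Worker" table="worker">
        <property name="shift"/>
    </union-subclass>
</class>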