In this paper we investigate the problem of providing consistency, availability, and durability for Web Service transactions. We consider the enforcement of integrity constraints in a way that increases availability while guaranteeing the correctness specified by the constraint. We study hierarchical constraints, which offer an opportunity for optimization because enforcing them requires an expensive aggregation calculation. We propose an approach that guarantees enforcement of the constraints and also allows write operations to be distributed among many clusters, thereby increasing availability. In our previous work, we proposed a replica update propagation method, called the Buddy System, which guaranteed durability and increased the availability of web services. In this paper, we extend the Buddy System to enforce hierarchical data integrity constraints.

Enterprise web-based transaction systems need to support many concurrent clients simultaneously accessing shared resources. These applications are often developed using a Service Oriented Architecture (SOA). SOA supports the composition of multiple Web Services (WSs) to perform complex business processes. One of the important requirements for SOA applications is to provide a high level of concurrency; we can view the measure of this concurrency as the availability of the service to all clients requesting it. A common way to increase availability is to replicate the services and their corresponding resources. Often a web farm is used to host multiple replicas of the web application, the web services, and their resources, and requests are distributed among the replicas. Consistency and durability are often sacrificed to achieve increased availability. The CAP theorem [1], [2], which states that distributed database designers can achieve at most two of the three properties consistency (C), availability (A), and partition tolerance (P), has influenced distributed database design in a way that often leads designers to give up immediate consistency.

In our previous papers we addressed issues related to increasing availability while still guaranteeing the durability and consistency of replicated databases. In this paper we address issues related to maintaining high availability while adding guarantees of correctness by enforcing hierarchical constraints. Traditionally, such hierarchical constraints are not enforced by the system because of their expensive run-time cost.

In our previous work [3], [4] we provided an extension to the lazy replica update propagation method that reduces the risk of data loss and provides high availability while maintaining consistency. The Buddy System executes a transaction on a primary replica. However, the transaction cannot commit until a secondary replica, "the buddy", also preserves the effects of the transaction. The rest of the replicas are updated using one of the standard lazy update propagation protocols. The Buddy System provides a guarantee of transactional durability (i.e., the effects of the transaction are preserved even if the server hosting the primary replica crashes before the update can be propagated to the other replicas) and efficient update propagation (i.e., our approach requires a synchronized update between only two replicas, thereby adding minimal overhead to the lazy-replication protocol).
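To make the commit rule concrete, the following is a minimal sketch of the buddy protocol as described above, assuming an in-memory key-value store per replica; the names (Replica, Cluster, commit, propagate_lazily) are illustrative assumptions and not the authors' implementation.

```python
# Minimal sketch of the buddy commit rule: execute on the primary, block the
# commit until the buddy has applied the effects, and propagate to the
# remaining replicas lazily. Names and structure are illustrative assumptions.

from dataclasses import dataclass, field
from queue import Queue


@dataclass
class Replica:
    name: str
    store: dict = field(default_factory=dict)

    def apply(self, writes: dict) -> None:
        """Apply the transaction's write set to this replica's copy."""
        self.store.update(writes)


@dataclass
class Cluster:
    primary: Replica
    buddy: Replica
    others: list
    lazy_queue: Queue = field(default_factory=Queue)

    def commit(self, writes: dict) -> bool:
        # 1. Execute the transaction on the primary replica.
        self.primary.apply(writes)
        # 2. The transaction cannot commit until the buddy has also
        #    preserved its effects (synchronous step between two replicas only).
        self.buddy.apply(writes)
        # 3. The remaining replicas are refreshed by standard lazy propagation.
        for replica in self.others:
            self.lazy_queue.put((replica, writes))
        return True

    def propagate_lazily(self) -> None:
        """Drain the propagation queue; in practice this runs asynchronously."""
        while not self.lazy_queue.empty():
            replica, writes = self.lazy_queue.get()
            replica.apply(writes)


if __name__ == "__main__":
    cluster = Cluster(Replica("primary"), Replica("buddy"),
                      [Replica("r1"), Replica("r2")])
    cluster.commit({"account:42": 100})
    cluster.propagate_lazily()
```

The point of the design is that the only synchronous step involves exactly two replicas, so durability survives a crash of the primary while the added overhead over a purely lazy protocol is a single extra acknowledgement.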
The Buddy System uses an application-layer dispatcher [5] to select the buddies based on the data items and operations of the transactions, the data versions available, and the network characteristics of the WS farm. A limitation of the Buddy System is that it cannot guarantee integrity constraints whose evaluation involves different classes. An example is an address that requires a valid owner in the Person class; this constraint cannot be enforced because the data may be mutated on different clusters simultaneously. In this paper we address this limitation. We provide an approach that extracts constraints expressed in the Object Constraint Language (OCL) from the Unified Modeling Language (UML) design model. The data needed for constraint evaluation is incrementally maintained, which allows the dispatcher to enforce the constraint and, once the check succeeds, distribute the requests to several clusters concurrently.
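The following sketch illustrates, under stated assumptions, how a dispatcher could enforce the cross-class constraint "every Address must reference an existing Person" using an incrementally maintained summary before routing a write; the names (ConstraintIndex, Dispatcher, route_address_insert) are hypothetical and not taken from the paper.

```python
# Illustrative sketch: the dispatcher checks a hierarchical (cross-class)
# constraint against an incrementally maintained index, then routes the
# write to one of several clusters. All names are assumptions.

class ConstraintIndex:
    """Incrementally maintained summary of Person ids used for constraint checks."""

    def __init__(self):
        self.person_ids = set()

    def person_added(self, person_id: str) -> None:
        self.person_ids.add(person_id)

    def person_removed(self, person_id: str) -> None:
        self.person_ids.discard(person_id)

    def valid_owner(self, person_id: str) -> bool:
        return person_id in self.person_ids


class Dispatcher:
    def __init__(self, clusters, index: ConstraintIndex):
        self.clusters = clusters
        self.index = index

    def route_address_insert(self, address: dict):
        # Enforce the cross-class constraint at the dispatcher, before the
        # write is distributed to any cluster.
        if not self.index.valid_owner(address["owner_id"]):
            raise ValueError("constraint violated: address has no valid owner")
        # Once the check succeeds, the write can be sent to any cluster,
        # so writes can be spread across clusters concurrently.
        return self.clusters[hash(address["owner_id"]) % len(self.clusters)]


if __name__ == "__main__":
    index = ConstraintIndex()
    index.person_added("p1")
    dispatcher = Dispatcher(clusters=["cluster-A", "cluster-B"], index=index)
    print(dispatcher.route_address_insert({"owner_id": "p1", "city": "Omaha"}))
```

In this sketch the constraint is evaluated once at the dispatcher, so the outcome does not depend on which cluster ultimately serves the request, which is what allows the writes to be distributed concurrently.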