
Enterprise Integration: ActiveMQ network of brokers

Introduction

ActiveMQ is a powerful and widely used open source messaging server, more commonly known as a "message broker". It is often used in Java-based enterprise integration solutions and provides enterprise features such as clustering, multiple message stores, and the ability to use any database as a JMS persistence provider besides the VM, cache, and journal persistence options. For more information on ActiveMQ, visit the website [1] and check the documentation. In general, JMS message brokers are often the heart of (enterprise) integration solutions.

Intermezzo

There is a commercially supported version available at FuseSource[2], as part of the FuseSource stack consisting of an Enterprise Service Bus (based on ServiceMix), Message Broker (based on ActiveMQ), Mediation Router (based on Apache Camel) and web services stack (based on CXF).

Besides these products, FuseSource offers other products such as Fuse HQ, an infrastructure monitoring tool, and Fuse IDE, an Eclipse-based visual development tool for designing Camel routes.

 

Concept: Network of Brokers

One of the more interesting features of ActiveMQ is the ability to create a network of brokers, connecting and clustering multiple instances of the message broker on different servers in a network (or a network of networks, on the internet, or somewhere in the cloud).

Networks of message brokers [3] in ActiveMQ work quite differently from more familiar models such as that of physical networks. In this article I will try to explain each piece, progressively moving up in complexity from a single broker to a full-blown network. At the end you should have a basic understanding of how these networks behave and be able to reason about the interactions across different topologies. Please note that this is not intended as an ActiveMQ tutorial; the few configuration sketches below are illustrative only.

 

Concept: Producer-Consumer

One of the key things in understanding broker-based messaging is that the production, or sending, of a message is disconnected from its consumption. The broker acts as an intermediary, making the method by which a message is consumed, as well as the route the message has travelled, orthogonal to its production. You don't really need to understand the entire postal system to know that when you post a letter in the big orange PostNL (or whatever it is called today) box, it will eventually arrive in the little box at the front of the recipient's house. The same idea applies here.

[Diagram: producer, broker and consumer]

The producer and consumer are unaware of each other; each knows only the broker it is connected to.

Connections are shown in the direction in which they were established (i.e. the consumer connects to the broker).

Out of the box, when a standard message is sent to a queue from a producer, it is sent to the broker, which persists it in its message store. By default this is KahaDB, but it can be configured to store messages in memory, which buys performance at the cost of reliability. Once the broker has confirmation that the message has been persisted in the journal (the terms journal and message store are often used interchangeably), it responds with an acknowledgement back to the producer. The thread sending the message from the producer is blocked during this time.

[Diagram: a producer sends a message, which the broker persists in its message store]

On the consumption side, when a message listener is registered or a call to receive() is made, the broker creates a subscription to that queue. Messages are fetched from the message store and passed to the consumer; this is usually done in batches, and the fetching is a lot more complex than a simple read from disk, but that's the general idea.

The consumer will usually process the message at this stage and subsequently acknowledge that it has been consumed. The broker then updates the message store, marking the message as consumed or simply deleting it (depending on the persistence mechanism).

[Diagram: message consumption and acknowledgement]

So what happens when there is more than one consumer on a queue? All things being equal, and ignoring consumer priorities, the broker will hand out incoming messages in a round-robin manner to each subscriber.

[Diagram: round-robin dispatch to multiple consumers]
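To make this concrete, here is a minimal sketch of a producer and a consumer using the plain JMS API against a single broker; the broker URL and queue name are assumptions for illustration only.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ProducerConsumerSketch {
  public static void main(String[] args) throws JMSException {
    // Assumed broker URL and queue name, for illustration only
    ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
    Connection connection = factory.createConnection();
    connection.start();
    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    Queue queue = session.createQueue("example.queue");

    // Producer: send() blocks until the broker has acknowledged persisting the message
    MessageProducer producer = session.createProducer(queue);
    producer.setDeliveryMode(DeliveryMode.PERSISTENT);
    producer.send(session.createTextMessage("hello"));

    // Consumer: receive() creates a subscription; the broker dispatches a message to it
    MessageConsumer consumer = session.createConsumer(queue);
    Message message = consumer.receive(1000);
    if (message != null) {
      message.acknowledge(); // tells the broker the message has been consumed
    }
    connection.close();
  }
}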

 

Concept: Store-and-forward

Now let's scale up to two brokers, Broker1 and Broker2. In ActiveMQ a network of brokers is set up by connecting a networkConnector to a transportConnector (think of it as a socket listening on a port). A networkConnector is an outbound connection from one broker to another.

[Diagram: Broker1 and Broker2 connected via a networkConnector]
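As a purely illustrative sketch (broker host names and ports are assumptions), the relevant pieces of each broker's activemq.xml could look roughly like this. On Broker1, a transportConnector accepts incoming connections:

<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
</transportConnectors>

On Broker2, a networkConnector initiates the outbound connection to Broker1:

<networkConnectors>
  <networkConnector name="toBroker1" uri="static:(tcp://broker1:61616)"/>
</networkConnectors>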

When a subscription is made to a queue on Broker2, that broker tells the brokers it knows about (in our case, just Broker1) that it is interested in that queue; another subscription is now made on Broker1 with Broker2 as the consumer. As far as an ActiveMQ broker is concerned, there is no difference between a standard client that consumes messages and another broker acting on behalf of a client. They are treated in exactly the same manner.

So now that Broker1 sees a subscription from Broker2, what happens? The result is a hybrid of the producer and consumer behaviors. Broker1 acts as the producer, and Broker2 as the consumer. Messages are fetched from Broker1's message store and passed to Broker2. Broker2 processes each message by "storing" it in its own journal and acknowledges consumption of that message. Broker1 then marks the message as consumed.

[Diagram: store-and-forward of a message from Broker1 to Broker2]

The simple consume case then applies as Broker2 "forwards" the message to its consumers, as if the message had been produced directly into it. Neither the producer nor the consumer is aware that a network of brokers exists; it is orthogonal to their functionality, and a key driver of this style of messaging.

Concept: Local and remote consumers

It has already been noted that as far as a broker is concerned, all subscriptions are equal. To it there is no difference between a local "real" consumer and another broker that is going to forward messages on. Hence incoming messages will be handed out round-robin as usual. With two consumers (Consumer1 on Broker1 and Consumer2 on Broker2), if messages are produced to Broker1, both consumers will receive the same number of messages.

[Diagram: local and remote consumers each receiving half of the messages]

A networkConnector is unidirectional by default, which means that the broker initiating the connector acts as a client, forwarding its subscriptions. Broker2 in this case subscribes on behalf of its consumers to Broker1. Broker2 however will not be made aware of subscriptions on Broker1. networkConnectors can however be made duplex, such that subscriptions are passed in both directions.
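For example, the connector sketched earlier could be made duplex with a single attribute (again purely illustrative; host name and port are assumptions):

<networkConnector name="toBroker1" uri="static:(tcp://broker1:61616)" duplex="true"/>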

So let's take it one step further with a network that demonstrates why it is a bad idea to connect brokers to each other in an ad-hoc manner. Let's add Broker3 into the mix such that it connects into Broker1, and Broker2 sets up a second networkConnector into Broker3. All networkConnectors are set up as duplex.

[Diagram: Broker1, Broker2 and Broker3 connected in an ad-hoc mesh]

This is a common approach people take when they first encounter broker networks and want to connect a number of brokers to each other, as they are naturally used to the internet model of network behaviour, where traffic is routed down the shortest path. If we think about it from first principles, it quickly becomes apparent that this is not the best approach.

Let’s examine what happens when a consumer connects to Broker2.

  • Broker2 echoes the subscription to the brokers it knows about - Broker1 and Broker3.
  • Broker3 echoes the subscription down all networkConnectors other than the one from which the request came; it subscribes to Broker1.
  • A producer sends messages into Broker1.
  • Broker1 stores and forwards messages to the active subscriptions on its transportConnector; half to Broker2, and half to Broker3.
  • Broker2 stores and forwards to its consumer.
  • Broker3 stores and forwards to Broker2.

Eventually everything ends up at the consumer, but some messages needlessly travelled Broker1->Broker3->Broker2, while the others went by the more direct route Broker1->Broker2. Add more brokers into the mix, and the store-and-forward traffic grows rapidly as messages flow through any number of weird and wonderful routes.

[Diagram: message flow through the mesh, showing unnecessary store-and-forward]

The message flow overview above shows lots of unnecessary store-and-forward.

Fortunately, it is possible to avoid this by employing other topologies, such as hub and spoke.

[Diagram: hub-and-spoke topology]

Better, isn't it? A message can flow between any of the numbered brokers via the hub, in a maximum of 3 hops.

You can also use a more nuanced approach that includes considerations such as unidirectional networkConnectors that pass only certain subscriptions, or reducing consumer priority such that more distant consumers have a lower priority than closer ones.
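As a rough sketch of such a nuanced setup (destination names and attribute values are assumptions), a networkConnector can be limited to certain destinations and can lower the priority of networked consumers:

<networkConnector name="toHub"
                  uri="static:(tcp://hub:61616)"
                  decreaseNetworkConsumerPriority="true"
                  networkTTL="2">
  <dynamicallyIncludedDestinations>
    <queue physicalName="orders.>"/>
  </dynamicallyIncludedDestinations>
</networkConnector>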

Each network design needs to be considered separately and trades off considerations such as message load, the amount of hardware at your disposal, latency (number of hops) and reliability. When you understand how all the parts fit together and think about the overall topology from first principles, it's much easier to work through.

Resources

[1] Apache ActiveMQ : http://activemq.apache.org

[2] FuseSource : http://www.fusesource.com

[3] Network of brokers : http://activemq.apache.org/networks-of-brokers.html


Integrating Activiti BPM with Mule and Camel

 

Introduction

Today there are many excellent frameworks and components for developing (enterprise) integration solutions. Typical integration components are BPM (Business Process Management), ESB (Enterprise Service Bus) and message brokers. Quite often these components complement each other, each with its own specific role, as can be seen in the image below.

[Diagram: SOA landscape with ESB and BPM components]

The role of the BPM component can vary based on your needs or your application's requirements. It can perform workflow tasks, business process tasks and orchestration tasks. Although these tasks (and their implementations) may differ, there is usually one obvious commonality: the need to interact with the other components.

As for the BPM part, there's a new kid in town, and it is here to stay. I'm talking about Activiti. This article shows how Activiti plays nicely with other components, such as an ESB (like Mule) and a message router (like Apache Camel).

About Activiti

Activiti is an open source workflow and Business Process Management (BPM) platform, currently part of Alfresco, replacing jBPM as its default workflow framework. Its core is a lightweight BPMN 2.0 process engine for Java, distributed under the Apache license. Activiti runs stand-alone or embedded in any Java (web) application using its excellent Spring support.

The Activiti engine provides a powerful Java API that makes it easy to deploy process definitions, implement custom logic and unit test processes. If you want to connect remotely to the Activiti engine there’s a REST API to communicate with the process engine. But what if you want to start a new process instance by sending a JMS message, or invoke a web service from a BPMN 2.0 process? This is a much more real-life scenario.
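For example, deploying a process definition and starting an instance through the Java API looks roughly like this (the resource name, process definition key and variable name are made up for illustration):

import java.util.HashMap;
import java.util.Map;
import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngines;

public class StartProcessExample {
  public static void main(String[] args) {
    // Assumes an activiti.cfg.xml configuration on the classpath
    ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();

    // Deploy a BPMN 2.0 process definition from the classpath
    processEngine.getRepositoryService()
        .createDeployment()
        .addClasspathResource("helloProcess.bpmn20.xml")
        .deploy();

    // Start a new process instance, passing in a process variable
    Map<String, Object> variables = new HashMap<String, Object>();
    variables.put("processVariable1", "hello");
    processEngine.getRuntimeService()
        .startProcessInstanceByKey("helloProcess", variables);
  }
}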

BPM integration with Web Services

By default, the BPMN 2.0 specification provides support for web service calls via a dedicated web service task. The Activiti engine supports this task, but it can be a bit cumbersome to implement due to the large amount of additional XML needed, and it only supports SOAP web service calls.

Luckily the Activiti community came to the rescue. In the latest releases of Activiti you'll find two interesting contributed modules, one for Mule ESB integration and one for Apache Camel integration. If you are interested, just download the latest version and play around, or check out the Activiti source and take a look for yourself. For more information on building from source, please read the 'Building the distribution' wiki page.

Now let’s walk through some examples to get an overview of these modules.

About Mule

Let's start with the Mule ESB. Mule has out-of-the-box BPM support, including an implementation for Activiti. This is not really a surprise, knowing that Activiti and MuleSoft work closely together on supporting each other's products.

Mule integration

With the Activiti Mule ESB integration there are two deployment options:

  • The first option is to run both Mule and Activiti in one Spring container; the Mule ESB then communicates with the Activiti engine through the Activiti Java API.
  • The second option is to run Mule ESB standalone and let it communicate with the Activiti engine through the REST API. The only difference is the Activiti connector, which is defined in the Mule configuration.

Below are examples of the Mule configuration for both deployment options.

Embedded configuration:

<mule
   xmlns="http://www.mulesoft.org/schema/mule/core"
   xmlns:activiti="http://www.mulesoft.org/schema/mule/activiti-embedded">
   <activiti:connector
      name="activitiServer"
      repositoryService-ref="repositoryService"
      runtimeService-ref="runtimeService"
      taskService-ref="taskService"
      historyService-ref="historyService" />
   <!-- Rest of the code shown in the next snippets -->
</mule>

     

Remote configuration:

<mule
   xmlns="http://www.mulesoft.org/schema/mule/core"
   xmlns:activiti="http://www.mulesoft.org/schema/mule/activiti-remote">
   <activiti:connector
      name="activitiServer"
      activitiServerURL="http://localhost:8080/activiti-rest/service/"
      username="kermit"
      password="kermit" />
</mule>

    The embedded configuration references Spring beans defined in the Activiti engine Spring configuration for the Mule ESB to communicate with the Activiti engine. The remote configuration defines the location of the REST API and the authentication parameters so Mule can use the Activiti REST API to communicate with the Activiti engine. For more information on Mule configuration, please take a look at the MuleSoft website. Access to the developer documentation requires signing up, but is free.

    Now let’s actually do something with the Activiti connector. Let’s kick off a new process instance of a simple BPMN 2.0 process using a JMS message. First, we have to set up the Activiti connector infrastructure in Mule, which is detailed below.

Activiti connector configuration:

<jms:activemq-connector name="jmsConnector" brokerURL="tcp://localhost:61616"/>

<flow name="CreateActivitiProcessFlow">
  <jms:inbound-endpoint queue="in.create" />
  <logger message="Received message #[payload]" level="INFO" />
  <activiti:create-process parametersExpression="#[payload]" />
  <jms:outbound-endpoint queue="out.create" />
</flow>

    Note that the "flow" tags represent a Mule message flow. When a message is sent to the ‘in.create’ queue, the message is logged using the #[payload] expression. Next, the Mule ESB Activiti module is invoked to create a new process instance. In this example, the JMS message is expected to be a MapMessage and the Map is retrieved to get the process parameters with the parametersExpression. To be able to start a process instance, we have to specify a ‘processDefinitionKey’ property in the MapMessage.

The additional properties specified in the MapMessage are all translated to process variables. Finally the process instance is created, and the newly created process instance object is sent to another JMS queue (out.create). This JMS message (an ObjectMessage) contains, among other things, the process instance ID, which can be used to retrieve information such as process variables.

    To test this example we need a bit of JMS plumbing code. If you’re interested in running the code examples yourself, you can check out the code from the Google Code repository, listed at the bottom of this article.
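A minimal sketch of what that plumbing could look like, assuming the broker URL and queue name from the Mule flow above; the process definition key is just an example, and this is not the exact code from the repository:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CreateProcessInstanceClient {
  public static void main(String[] args) throws JMSException {
    ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
    Connection connection = factory.createConnection();
    connection.start();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

    // The MapMessage must carry the processDefinitionKey;
    // any additional entries become process variables
    MapMessage message = session.createMapMessage();
    message.setString("processDefinitionKey", "myProcess"); // example key
    message.setString("processVariable1", "hello");

    MessageProducer producer = session.createProducer(session.createQueue("in.create"));
    producer.send(message);
    connection.close();
  }
}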

    In addition to creating new process instances, you can also set new process variables, signal a process instance etc. For a full overview of BPM functionalities in Mule, you can read the Mule ESB Activiti BPM documentation.

    Intermezzo

    I already mentioned that Activiti uses the BPMN 2.0 specification for modeling process definitions. This is great, since you won’t be tied to a vendor specific or closed specification.

Mule does have its own DSL for specifying Mule message flows, but it tries to implement the Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. In fact, many tools and frameworks are based on these patterns, such as Apache Camel, ServiceMix/Fuse and Spring Integration (just to name a few).

These patterns allow us to define message flows graphically. For more information on Enterprise Integration Patterns, take a look at http://www.eaipatterns.com. In this article you will find some examples explained using these patterns.

    Communication

In addition to communicating with the Activiti engine from the Mule ESB, it is also possible to send messages from a BPMN process to the Mule ESB. This opens up possibilities to send, for example, JMS messages, or to create advanced integration logic from a BPMN process. The current implementation of this functionality is limited to the embedded mode, but there is no reason why it can't be extended to also support the standalone or remote setup. Let's look at a simple Activiti process definition containing a Mule send task.

<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
  xmlns:activiti="http://activiti.org/bpmn"
  targetNamespace="http://www.activiti.org">
  <process id="helloMuleWorld">
    <startEvent id="Start" />
    <sequenceFlow sourceRef="Start" targetRef="sendToMuleTask" />
    <sendTask id="sendToMuleTask" activiti:type="mule">
      <extensionElements>
        <activiti:field name="endpointUrl">
          <activiti:string>vm://in</activiti:string>
        </activiti:field>
        <activiti:field name="language">
          <activiti:string>juel</activiti:string>
        </activiti:field>
        <activiti:field name="payloadExpression">
          <activiti:expression>${processVariable1}</activiti:expression>
        </activiti:field>
        <activiti:field name="resultVariable">
          <activiti:string>processVariable2</activiti:string>
        </activiti:field>
      </extensionElements>
    </sendTask>
    <sequenceFlow sourceRef="sendToMuleTask" targetRef="End" />
    <endEvent id="End" />
  </process>
</definitions>

In this example we send a message to the in queue of Mule's VM transport (an in-memory messaging component). The message contains the value of the processVariable1 process variable, and the response (we use a request-response exchange pattern in the Mule flow configuration) is written to a new process variable named processVariable2.

The Mule message flow configuration listening on this VM endpoint looks like this.

<flow name="HelloMuleWorldFlow">
  <vm:inbound-endpoint path="in" exchange-pattern="request-response" />
  <logger message="Received message #[payload]" level="INFO" />
  <script:transformer>
    <script:script engine="groovy">return 'world'</script:script>
  </script:transformer>
</flow>

    The message is logged and a simple Groovy script returns a response message with the content ‘world’. This shows how easy it is to send a message from a BPMN process to the Mule ESB.

    Now let’s take a look at the Apache Camel implementation.

    About Camel

Apache Camel is a lightweight integration framework which implements all the Enterprise Integration Patterns, so you can easily integrate different applications using the required patterns. You can use Java, Spring XML, Scala or Groovy. Besides that, custom components can be created very easily.

You can deploy Apache Camel as a standalone application, in a web container (e.g. Tomcat or Jetty), in a Java EE application server (e.g. JBoss AS or WebSphere AS), in an OSGi environment, or in combination with a Spring container.

    Camel integration

As mentioned before, there is another great and widely used integration framework that can be used with Activiti: Apache Camel. You should understand that both Mule ESB and Apache Camel are capable of implementing a lot of similar integration logic.

Of course, there are plenty of differences. One of the most important is that Mule is usually considered a product, while Camel is much more of a framework. Mule is a real ESB with the usual functionality; Camel is technically not an ESB, although it provides much of the same integration logic.

Another difference is that the Camel integration always runs embedded with the Activiti engine in the same Spring configuration. So you have to define a Spring XML configuration that includes an Activiti engine config and a Camel context config. To be able to start new process instances from Camel, the deployed process definition key is made available in the Camel context, as you can see in the following snippet.

<beans
   xmlns="http://www.springframework.org/schema/beans"
   xmlns:camel="http://camel.apache.org/schema/spring">

  <bean
   id="activemq"
   class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="brokerURL" value="tcp://localhost:61616" />
  </bean>

  <bean id="camel" class="org.activiti.camel.CamelBehaviour">
    <constructor-arg index="0">
      <list>
        <bean class="org.activiti.camel.SimpleContextProvider">
          <constructor-arg index="0" value="helloCamelProcess" />
          <constructor-arg index="1" ref="camelProcess" />
        </bean>
      </list>
    </constructor-arg>
  </bean>

  <camelContext
   id="camelProcess"
   xmlns="http://camel.apache.org/schema/spring">
    <packageScan>
      <package>nl.ciber.knowledgeblog.camel</package>
    </packageScan>
  </camelContext>
</beans>

In this configuration we create a connection to an ActiveMQ broker that we'll use later on. Next, a SimpleContextProvider is defined that connects a deployed process definition on the Activiti engine to a Camel context. You can define a SimpleContextProvider for each process definition that you want to connect to a Camel context. In the last part, a Camel context is defined that scans for RouteBuilder classes in the configured package.

With the infrastructure in place we can now define integration logic in a Camel RouteBuilder class using the fluent API.

import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;

public class CamelHelloRoute extends RouteBuilder {

  @Override
  public void configure() throws Exception {
    // Route 1: create a new process instance for each incoming JMS message
    from("activemq:in.create")
        .log(LoggingLevel.INFO, "Received message ${body}")
        .to("activiti:helloCamelProcess")
        .log(LoggingLevel.INFO, "Received message ${body}")
        .to("activemq:out.create");

    // Route 2: implements the logic of serviceTask1 in the process definition
    from("activiti:helloCamelProcess:serviceTask1")
        .log(LoggingLevel.INFO, "Received message on service task ${property.var1}")
        .setProperty("var2").constant("world")
        .setBody().properties();
  }
}

There are two Camel routes defined in this RouteBuilder class. The first route listens for new messages arriving on the 'in.create' ActiveMQ queue. The message is logged, a new instance of the 'helloCamelProcess' process definition is created, and the process instance ID is logged and sent to the 'out.create' ActiveMQ queue. So when we send a JMS MapMessage to the 'in.create' queue, all entries of the map are set as process variables on the new 'helloCamelProcess' instance, and the process instance ID is sent to the 'out.create' queue.

In the second route, the Java service task logic of the 'helloCamelProcess' process is implemented (we'll see in a bit how this is wired up in the BPMN 2.0 XML). First the process variable var1 is logged, and then a new process variable var2 is set on the process instance. Of course, we could implement far more complex integration logic here, like sending a JMS message or invoking a web service.

Now let's look at how the logic of the Java service task (serviceTask1) is delegated to this Camel route.

<definitions targetNamespace="http://activiti.org"
    xmlns:activiti="http://activiti.org/bpmn"
    xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <process id="helloCamelProcess">
    <startEvent id="Start" />
    <sequenceFlow sourceRef="Start" targetRef="serviceTask1" />
    <serviceTask id="serviceTask1" activiti:delegateExpression="${camel}" />
    <sequenceFlow sourceRef="serviceTask1" targetRef="waitState" />
    <receiveTask id="waitState" />
    <sequenceFlow sourceRef="waitState" targetRef="End" />
    <endEvent id="End" />
  </process>
</definitions>

As you can see, the Camel route delegation is really simple. We only have to reference the CamelBehaviour Spring bean (camel) we defined earlier. In the source code you can find a unit test that runs the full example.

    Conclusion

With the availability of both integration modules, a wide range of integration options can be leveraged. The BPMN 2.0 specification already supports the web service task, and the Activiti engine adds a powerful Java service task, but now a whole range of ESB transports and enterprise integration patterns are available for you to use.

    Resources

    Activiti project website : http://www.activiti.org

    BPMN 2.0 specification : http://www.omg.org/spec/BPMN/2.0/

    Mule ESB : http://www.mulesoft.org

    Apache Camel : http://camel.apache.org

    Source code: https://activiti-integration-demo.googlecode.com/svn/trunk/


    Real Time Saver

We all have those projects where a bunch of Excel sheets are made available to you by the business: "All the business logic is in there. Have fun!" Next, you will spend days trying to figure out how stuff works. You can save a lot of time by using the Excel plug-in "trace", made by Christopher Teh Boon Sung: http://www.christopherteh.com/trace/.

It makes a great graphical overview of how all the cells are connected to each other. This can save you several days!

    Till Next Time


OBIEE11g Installation on 32-bit XP Pro IV

This is part IV of a four-part series:

    Part I: http://knowledge.ciber.nl/weblog/?p=254

    Part II: http://knowledge.ciber.nl/weblog/?p=255

    Part III: http://knowledge.ciber.nl/weblog/?p=256

    Part IV: http://knowledge.ciber.nl/weblog/?p=257 

Navigate to D:\MW_OBI11G\user_projects\domains\bifoundation_domain\servers\bi_server1\security and check if there is a file called boot.properties.

    image

If not, create it using a text editor and enter:

    image

    
    

    username=weblogic

    password=Password

    Do the same for D:\MW_OBI11G\user_projects\domains\bifoundation_domain\servers\AdminServer\security

    image

    Reboot your BI-server.

Let's make a couple of handy start and stop .bat files:

01 Start Weblogic.bat:

@echo off
cls
d:
D:\MW_OBI11G\user_projects\domains\bifoundation_domain\bin\startWebLogic.cmd

Run the script and wait until you see the IP addresses:

     image

02 Start Opmn.bat:

cls
d:
D:\MW_OBI11G\instances\instance2\bin\opmnctl startall

03 Start Bi-server.bat:

@echo off
cls
d:
D:\MW_OBI11G\user_projects\domains\bifoundation_domain\bin\startManagedWebLogic.cmd bi_server1
pause

    Run the script:

    image

Wait until you see the IP addresses.

Let's check if everything went OK:

Log in to Enterprise Manager using the weblogic account:

    http://localhost:7001/em

    image

    image

This looks OK; let's go to the Console:

    http://localhost:7001/console/

    image

    image

    Click on servers:

    image

    image

Now for the grand finale, check OBIEE:

    http://localhost:9704/analytics

    image 

    Select the Quickstart Dashboard:

    image

    Click around and have Fun!

    image

    I like to stop the server using scripts to prevent it from going into a recovery mode on the next start:

96 stop bi-server.bat:

@echo off
cls
d:
D:\MW_OBI11G\user_projects\domains\bifoundation_domain\bin\stopManagedWebLogic.cmd bi_server1

97 Stop OPMN.bat:

@echo off
cls
d:
D:\MW_OBI11G\instances\instance1\bin\opmnctl stopall

98 stop weblogic.bat:

@echo off
cls
d:
D:\MW_OBI11G\user_projects\domains\bifoundation_domain\bin\stopWebLogic.cmd

    Till Next Time


OBIEE11g Installation on 32-bit XP Pro III

This is part III of a four-part series:

    Part I: http://knowledge.ciber.nl/weblog/?p=254

    Part II: http://knowledge.ciber.nl/weblog/?p=255

    Part III: http://knowledge.ciber.nl/weblog/?p=256

    Part IV: http://knowledge.ciber.nl/weblog/?p=257

Let's start the main installer from D:\OBIEE11G1130\Install\bishiphome\Disk1:

    image

    image

    Press Next

    image

    Select Simple Install

    image

Press Next

    image

    Enter your Middleware Home Path

    image

Enter a password for the weblogic user and make a big mental note of it.

If you get a warning like the one below,

image

press Unblock.

    image

    Enter the credentials

    image

Optionally, enter your personal Oracle Support credentials.

    image

Press Install and have a drink; this might take a while.

    image

Check the BI Configuration box and let it run; have another drink.

    image

There is a big chance the last step failed; don't worry, we will fix that later.

    image

    Save the summary file, it has handy information for later use.

In the next part we will finish the configuration.

    Till Next Time


OBIEE11g Installation on 32-bit XP Pro II

This is part II of a four-part series:

    Part I: http://knowledge.ciber.nl/weblog/?p=254

    Part II: http://knowledge.ciber.nl/weblog/?p=255

    Part III: http://knowledge.ciber.nl/weblog/?p=256

    Part IV: http://knowledge.ciber.nl/weblog/?p=257

Let's create the repository using the RCU:

    image

(Some of the screens are in Dutch, but all the buttons are in the same place.)

    image

    Select Create

    image

Enter your database credentials. Be sure the account has SYSDBA privileges.

    image

    Press Ok

    image

    Enter a prefix and check the BI components.

    image

Another check; press OK.

    image

Enter the schema passwords (make a big mental note).

    image

    Map the tablespaces.

    image

    press OK

    image

    Press OK

    image

    You are almost done, press Create

    image

Press Close; you are done.

Let's start the OBIEE and FM setup from D:\OBIEE11G1130\Install\bishiphome\Disk1.

    Till Next Time


OBIEE11g Installation on 32-bit XP Pro I

OBIEE11g is out! Yeah! Let's try and install it on a Windows 32-bit XP Pro box.

This will be a four-part series:

    Part I: http://knowledge.ciber.nl/weblog/?p=254

    Part II: http://knowledge.ciber.nl/weblog/?p=255

    Part III: http://knowledge.ciber.nl/weblog/?p=256

    Part IV: http://knowledge.ciber.nl/weblog/?p=257

OK, we have finally downloaded the 4.5 GB:

    image

    image

    (While you are waiting for the download to finish start reading the manuals: http://download.oracle.com/docs/cd/E14571_01/bi.htm)

Unzip them into a single folder:

    image

    Copy all the bishiphome subfolders to a single install folder:

    image

Check if you have access to a database:

    image

Check if you have a loopback adapter installed; this provides at least one fixed IP address for your system:

    image

    image

    Start the RCU (Repository Creation Utility) found in: D:\OBIEE11G1130\ofm_rcu_win32_11.1.1.3.3_disk1_1of1\rcuHome\BIN

    image

    Till Next Time


    OBIEE Playing With TopN Part 2

    In part 1 we ended with:

    image



    OBIEE Playing With TopN Part 1

Basic question: give back the Top 10 customers:

    image



    OBIEE Events Calendar

     image

First of all, kudos to Hitesh for laying the groundwork: http://hiteshbiblog.blogspot.com/2010/04/obiee-showing-data-on-calendar.html


