Monday, January 05, 2015

Shifting IT to scale

While some IT groups are fine-tuning their practices, many others are still on the journey to greater efficiency at scale. Not all can make this shift (typically due to size and budget), and that may be fine for them, but there are the unfortunate few who have not yet realized the need to shift their IT practices at all. Integration, deployment, and operations have been going through a huge transition, driven in large part by business needs to support more customers globally and to provide better availability and response times, all at low cost (how else can you maximize profits?). A simple phrase for this is “cloud enablement”. Yes, that catch-all phrase that hopefully you’ve come to realize you need to be a part of, or risk falling behind. To achieve this, a fully automated deployment pipeline is a necessary component, which in turn requires a few things be put in place, namely:
  • Thorough application monitoring
  • A collaborative culture
  • Developer virtual machines
  • One-click integrations
  • Continuous integration
To support many daily deployments, the development process should revolve around making many small, continuous changes while keeping risk to a minimum. To be comfortable with deploying at all times, you'll need to adopt a range of tools and practices.

Making Developers Comfortable
One of the best ways to ensure a developer can be comfortable making any deployment is to give each developer their own full production stack. Every developer should have their own virtual machine (there are free options like VirtualBox, my preferred, as well as KVM and Xen), configured with a configuration management tool such as Puppet, Chef, Ansible, Salt, or whatever you use internally, using the same configuration as production. Ensuring the whole provisioning process is automated is another necessity, since this not only optimizes productivity by minimizing creation times, but also eliminates the human error of typing multiple commands that may vary by environment and platform.
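As a minimal sketch of what this looks like in practice (the “devstack” box name and the paths are my assumptions, not a prescription):

```shell
# Sketch: each developer brings up their own production-like VM.
# "devstack" is an assumed base box built with the production Puppet manifests.
mkdir -p ~/vms/devstack && cd ~/vms/devstack
vagrant init devstack       # same base box for every developer
vagrant up --provision      # config management applies the production configuration
vagrant ssh                 # your own full stack, safe to break
```

The key point is that the whole sequence is scriptable, so provisioning a fresh stack is one command rather than a page of manual steps.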

On the continuous integration front, having a tool which allows developers to test their changes without having to commit to the production code repository helps keep production clean, and thus deployable, while allowing quick and reliable testing. With the rising popularity of container technologies (looking at you, Docker), one can spin up on-demand, isolated, parallelized containers to conduct separate tests. The deployment process then becomes a simple one-click promotion between environments. A/B or other such zero-downtime production testing further adds to the comfort level, not just for developers but for the business in general.
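A sketch of the on-demand, parallel container idea; the myapp-test image and run-tests.sh entry point are assumptions for illustration:

```shell
# Sketch: run each test suite in its own throwaway container, in parallel.
# "myapp-test" and "./run-tests.sh" are assumed names, not from this post.
for suite in unit integration acceptance; do
  docker run --rm --name "ci-$suite" myapp-test ./run-tests.sh "$suite" &
done
wait    # all suites finish independently; no shared state between them
```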

Just as important to continuous delivery is monitoring. KPIs (key performance indicators) should be well known and graphed (I like graphing everything that can be graphed; being a visual person, I find it tells the story much quicker and is more effective). Most monitoring solutions now provide anomaly pattern detection, which can be quite useful versus eyeballing some numbers or even a graph. You're typically better off with a hybrid log approach where each application/service sends logs to two locations: its own log aggregator service, which provides short-term storage and insight into its local activities without any external dependencies; and a centralized log aggregator service, providing longer-term storage with end-to-end insight across all services and clients.
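As one concrete illustration of this hybrid approach, an rsyslog rule set can keep a short-term local copy while forwarding everything to a central aggregator (the file paths, hostname, port, and facility below are assumptions):

```
# /etc/rsyslog.d/30-hybrid.conf (sketch)
# Short-term local copy: troubleshooting with no external dependencies.
local0.*    /var/log/myapp/app.log
# Forward everything to the central aggregator over TCP (@@ = TCP, @ = UDP)
# for long-term storage and end-to-end correlation.
*.*         @@logs.example.com:514
```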

Achieving a Collaborative Culture
Any highly ambitious, or for that matter successful, endeavor requires a high degree of collaboration, ongoing collaboration to be exact. Most high performers are by their very nature social and want to talk, to share, and to collaborate. One only needs to look at the success of Twitter, Facebook, and other social media platforms to see this truth. Enabling that collaboration with the appropriate solution and practices is the only seed required to make this desire grow and succeed. A highly collaborative communication style, which I like, is based on IRC, with chat rooms or channels for various specific purposes. For example, each team can have its own room/channel for private communication, another room/channel for a specific service/application, and yet another for general discussion or perhaps a “war room” (such as #warroom for outage-related conversations to coordinate an investigation, discuss countermeasures, and monitor the resolution). Many such solutions are available in the market, offering a full breadth of features such as email and ticket integration, video conferencing, whiteboarding, and so on.

Part of a collaborative culture also involves doing a post-mortem, lessons learned, or root cause analysis following an incident. I really like the idea of making this blameless, as I think it gets things done more effectively. Typically everyone already knows who (or which team) is at fault, and assigning blame in a public manner only serves to decrease morale and job satisfaction, and gets in the way of actually learning what happened and how. Finger pointing is never productive in my experience.

A final word on on-call
I don’t like being on call, as in I don’t like being woken at 3am or having to sit by the phone. I’m sure no one actually does, but it is a necessary process for operations, support, and developers. Being on-call not only makes you want to have things working so you don’t get called, but also ensures you stay in touch with the day-to-day issues being faced. This is especially important when introducing new features or improving existing processes. I like going with a rotation schedule of one week in every four. I think this is quite typical and agreeable to most.
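The rotation itself can be as simple as indexing a list by week number; a tiny sketch with placeholder names:

```shell
# Sketch: pick this week's on-call from a four-person rotation.
# The names are placeholders, not real engineers.
oncall=(alice bob carol dave)
week=$(date +%V)    # ISO week number, 01-53
# 10# forces base-10 so weeks 08 and 09 aren't parsed as octal.
echo "on call this week: ${oncall[$(( 10#$week % 4 ))]}"
```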

Sunday, September 28, 2014

Installing Tomcat 8.0.x on OS X

Prerequisite: Java

On OS X 10.9.x (Mavericks), Java is not installed by default anymore. The easiest way to get Java on your Mac is to open the Terminal app and type ‘java’. You will be asked if you want to install Java, and OS X will take care of the rest; you just need to follow the instructions and you’ll end up with Java 7. This involves sending you to Oracle’s Java SE web page, where you will need to select the appropriate JDK (JDK 7u67 as of this writing) for download and installation.

The JDK installation package comes in a dmg and installs easily on a Mac. In the same or a different Terminal window, entering:

java -version

will now show something like this:

java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

This confirms you have successfully installed Java 7.

Installing Tomcat

Now comes the Apache Tomcat installation, which is actually quite easy.

1. Download a binary distribution of the core module (apache-tomcat-8.0.12.tar.gz).

2. Using any available unarchive tool, unarchive the file from the Downloads folder to ‘/usr/local’. You will likely need administrative privileges for this step. You can unarchive the file contents to another location of your choosing; I chose ‘/usr/local’ since it’s my standard and will centralize Tomcat.
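For the command-line inclined, step 2 can be done with tar (the archive name is the one current as of this writing):

```shell
# Extract the Tomcat distribution straight into /usr/local;
# tar ships with OS X, so no extra tool is needed.
sudo tar -xzf ~/Downloads/apache-tomcat-8.0.12.tar.gz -C /usr/local
```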

3. To make future release updates transparent, create a symbolic link from your install location (‘/usr/local/apache-tomcat-8.0.12’) to the Library location:

sudo ln -s /usr/local/apache-tomcat-8.0.12 /Library/Tomcat

4. Permissions and mode should already be okay (they were for me), but to make sure, you can run the following (note that chown needs an owner; $(whoami) supplies your own user):

sudo chown -R $(whoami) /Library/Tomcat
sudo chmod +x /Library/Tomcat/bin/*.sh

5. Start up your Tomcat instance:

/Library/Tomcat/bin/startup.sh

6. Verify things are working by opening a browser window/tab to the default URL (http://localhost:8080) and taking a look at the default page.
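If you prefer to stay in the terminal, a quick curl check does the same job (assuming the default port):

```shell
# Fetch just the response status line from the running Tomcat instance.
curl -sI http://localhost:8080 | head -n 1
```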

Everything should be functional and now ready for your customizations and/or deployments.

Wednesday, May 28, 2014

How to resolve "vagrant up" failing with "VBoxManage.exe: error: Code CO_E_SERVER_EXEC_FAILURE"

I was trying to use a pre-built vagrant box, one built by Mathew Baldwin for WLS12c on CentOS 6.4, and ran into a problem. The vagrant up command failed:

A few things:

  • Running the VirtualBox command "vboxmanage list hostonlyifs" separately is fine.
  • I'm not running the command or session as an Administrator; in fact, that fails with a completely different error, given it's a separate account and does not have the required vagrant box.
  • I'm using the awesome Console2 (by Marko Bozikovic) but that matters not since the same error occurs in plain old cmd.exe
  • The command does complete successfully in MobaXterm 7.1 (another awesome tool!), though later down during the processing of the box (during Puppet configuration I believe) my machine does a hard shutdown/crash
  • VirtualBox version is 4.3.12
  • Vagrant version is 1.6.2
  • Windows 7 Pro SP1, 64-bit 

A Google search revealed others have had this problem, but not many actual solutions, or at least not one that worked for me. Here's how I got past this problem without using MobaXterm (given that it had shut down my machine each time previously).

The fix was to start VirtualBox before running vagrant up. Even then, some instability remains, due to what looks to me like timing issues in vagrant's commands to VirtualBox: on occasion the process fails (i.e. times out) waiting for the VM to start up and show the login prompt.
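For reference, the workaround amounts to something like this from a Git Bash prompt (the VirtualBox install path is an assumption; adjust for your machine):

```shell
# Launch VirtualBox first so its COM server is registered before vagrant needs it.
VBOX="/c/Program Files/Oracle/VirtualBox/VirtualBox.exe"
"$VBOX" &
sleep 10     # give it a moment before vagrant starts issuing commands
vagrant up
```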

Tuesday, October 02, 2012

Oracle OpenWorld 2012: Monday

Database 12c Features

Following yesterday's big announcements and new rumors, I made a few schedule changes, ensuring I allotted more time in the demo grounds to talk to various Oracle specialists concerning Database 12c. The demo grounds are truly amazing, with a wealth of contacts to be made and things to be learnt from talking to the various "informed" vendors (including Oracle). Even more amazing is the large number of people who come solely for the purpose of winning free stuff; not that I'd complain if I won one of the numerous iPads on offer, or even better the $10,000 offered by EMC (good to know where my company's money is going). I'm curious whether any of these vendors do any analysis on the "real" contacts made vs. those just looking for stuff, and on the follow-up sales made.

There are many new and usable features in 12c, and I would argue this will be the biggest release Oracle has done to date (when it ships) in terms of changes and features. The upgrade process via DBUA has been given some attention, with parallelism during the upgrade itself, fix-it scripts, resumption from some failures (instead of starting from scratch for everything), and a post-upgrade health check. Transportable Tablespaces (TTS) via Data Pump will be more efficient, automatically running all prerequisite checks and doing the full export and import. Meaning it figures out all the metadata dependencies, creates all the users, objects, grants, etc. from the source onto the target, then copies across all the data, and voilà!
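To illustrate, the full transportable flow via Data Pump looks roughly like this. This is a hedged sketch: the connect strings, directory object, and file names are all assumptions:

```shell
# Sketch of a full transportable export on the source database.
# All names here (srcdb, tgtdb, DP_DIR, file paths) are placeholders.
expdp system@srcdb full=y transportable=always version=12 \
      directory=DP_DIR dumpfile=srcdb_meta.dmp logfile=srcdb_exp.log

# ...copy the datafiles plus the dump file to the target system, then:
impdp system@tgtdb full=y directory=DP_DIR dumpfile=srcdb_meta.dmp \
      transport_datafiles='/u02/oradata/tgtdb/users01.dbf'
```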

Some questions (but not all) surrounding Pluggable Databases (PDB), which I mentioned yesterday, were answered today as well. It will support pre-12.1 databases (pre-12.1 only is my guess, based on the slides used) plugging into a 12.1 container database (the housing or hypervisor database, if you will). All databases can be backed up as one and recovered separately, including point-in-time (PIT). PDB can also be run in standby setups, though I'm still left wondering how PDB works exactly in a RAC environment. Migration into this architecture appears to be done via Data Pump (I'm guessing TTS, since otherwise it would be a long migration). Resource utilization is handled by Resource Manager (DBRM), so processor (and other settings?) usage can be allocated to each database. Patching/upgrades can be done separately for each database, though I'd imagine the container DB must always be at the highest release (similar to Grid Infrastructure). A question, out of many I have, is how does this affect Oracle VPD, and will it be a paid feature? (we all know the answer is yes)

Another very interesting and immediately usable feature is "Automatic Usage Based Compression". Essentially, a heat map of table partitions is used to compress various partitions based on usage/activity (INSERT, UPDATE, and DELETE statements) according to user-defined policies. Compression is done online, in the background. Does this mean HCC is open to all now? Is this using DBMS_REDEFINITION under the covers for the online compression change? What about compressing table blocks and not just partitions? Will the thresholds for hot, warm, and cold be adjustable (there is always some hidden parameter)? Is this part of the compression package/option, and how much will it cost?

Redaction of Sensitive Data is another big feature. This moves the masking of sensitive data from the application level to the database level, where the DBA makes the change online and immediately (no logoff/logon required) based on set policies. I'm left wondering how this affects the Data Masking Pack and Label Security (and again, how much?).

A new feature (which is actually available and supported now for Exadata) is RMAN Cross Platform Incremental Backups. This uses RMAN to do a platform conversion from a big-endian platform such as Solaris SPARC, IBM AIX, or HP-UX to Linux using backups/restores, where the incrementals can be applied to the target until ready to switch platforms, at which point the actual switchover takes considerably less time (and effort). Note 1389592.1 explains this in greater detail.

The Jimmy Cliff Experience

Whoever thought up the Oracle Music Festival deserves a raise, and even better, the party responsible for bringing in Jimmy Cliff deserves a promotion! The man must be in his 60s at least but has more energy than a 20-year-old! He completely rocked the house with his energy, song arrangements, and charisma. The crowd (myself included) was so thoroughly involved and enjoyed his performance so much that an encore was demanded, and graciously given.

I had another big day ahead of me, so I left during his encore. I can already feel the pain in various body parts following my "dancing" (I use the term loosely). A good end to another great day at OpenWorld...

Monday, October 01, 2012

Oracle OpenWorld 2012: Sunday (continued...)

Sessions and Keynote

I attended a few sessions which for the most part were informative in some way. One take-away was that Oracle Enterprise Manager 12c Cloud Control is very popular, as it is on a lot of minds (mine included, since I'm trying to get my company to bypass our 11g implementation and go straight to 12c, but I digress). The few sessions I attended drew quite a lot of questions. I was also able to meet a few of my Twitter contacts (@dbakevlar, @aakela, and @fuadar), which was awesome!

An interesting nugget in the session "Will it blend? Verifying Capacity in Server and Database Consolidations" was 'Consolidated DB Replay'. This is a feature introduced in a patch (13947480) which (as the name suggests) allows for the concurrent replay of multiple captured workloads. The use case, of course, is capturing workloads from multiple source database systems and replaying them on a single target system intended as a consolidation database. Ideally the capture times and periods across the multiple source databases should be the same, to get the best picture of what the combined workload would look like on a single consolidated database. This feature would replace (or minimize) manual efforts involving visually analyzing workload graphs in OEM (as an example) for each database, or looking at consolidated/merged AWR information for the multiple source systems.


There had been many rumors surrounding a new Exadata 1/8 rack configuration, Exadata hardware upgrades (would there be an X3-2?), and Database 12c (whether it would be announced or not). I did not attend the Enkitec Extreme Exadata Expo (E4) 2012, but did read some of the tweets and postings concerning the sessions, which pointed to such announcements. As it turned out, Larry's keynote did not disappoint, confirming the rumors with the announcements of Exadata X3-2 (including an Exadata X3-2 1/8 rack), an Oracle Database 12c release sometime in 2013 (my guess is some features will be cut for a January/February 2013 release), Oracle Private Cloud, and IaaS.

Oracle Private Cloud is an offering for companies needing their own private infrastructure, which can either run externally at Oracle facilities or inside the company's own data center, but managed completely by Oracle. Having experienced various Oracle support services (OCS, OCMS, and Oracle On Demand), I can say the success of this offering will depend heavily on improvements to those support offerings, and on a clear understanding between all involved parties as to what is meant by "managed". Oracle Cloud is Oracle's Infrastructure as a Service (IaaS) offering, composed of Exadata, Exalogic, Oracle Linux, Oracle VM, Oracle Storage, and InfiniBand (IB) components.

Oracle Database 12c, to be released sometime in 2013 (January/February is the whisper), will have some interesting features (I'm not sure what I can disclose at this time other than what has just been announced), such as Pluggable Databases. This is essentially multiple databases sharing the same server using containerization at the database level, therefore being more efficient (so not complete separation in terms of processes and memory) and not requiring any application changes. For those familiar with SQL Server, PostgreSQL (including Netezza), and other such database platforms, this is not anything new. It is, however, new in the context of Oracle databases, and has several benefits in the areas of consolidation and hosting.

Exadata X3-2 was announced as the hardware refresh for the previous-generation X2-2, along with a new 1/8 rack deployment option (the starting price for negotiations is $200,000, nice!). Strangely enough, there is also an Exadata X3-8 as the refresh for the X2-8, but this got no recognition (perhaps these do not sell as well and are a niche offering?). A few quick overview specifications are below:

Database Nodes

  • up to 8 x Oracle/Sun X3-2 servers
  • up to 2 TB RAM or 256 GB/node
  • up to 128 cores using 2x8-core Intel E5-2690 (2.9 GHz) per node

Storage Nodes

  • Up to 14 x Sun X3-2L
  • Up to 168 cores using 2x6-core Intel E5-2600 series per node
  • Up to 22 TB Flash memory
  • Up to 168 x 600 GB 15K rpm HP or 168 x 3 TB 7.2K rpm HC HDD

In terms of performance, a full rack X3-2 should scream with:

  • ~50K IOPS using 8K IO requests (most vendors use 2K so be careful doing comparisons)
  • 100 GB/s bandwidth taking into consideration HP HDD and Flash
  • 16 TB/hour data loads (from past experience this is within the same array so again, be careful and ask specific questions).
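That request-size caveat is easy to put in numbers: at the same advertised IOPS, an 8K request size means 4x the throughput of a 2K request size.

```shell
# Same IOPS figure, very different throughput depending on IO size.
iops=50000
echo "at 8K requests: $(( iops * 8 / 1024 )) MB/s"   # prints 390
echo "at 2K requests: $(( iops * 2 / 1024 )) MB/s"   # prints 97
```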

A software upgrade to the platform (meaning available now without upgrading to X3-2) brings Cached Writes to go along with the previous Cached Reads. So maximum IOPS for 8K IO requests involving Flash is ~1,500,000 for reads and ~1,000,000 for writes. With compression (your mileage will vary with your data), the numbers should be better, but again: test, test, and test again. Your workload was not used when obtaining these benchmark figures. Usable disk capacity is ~45 TB for HP and ~224 TB for HC HDD. Also, Oracle Cloud and Private Cloud will start with Exadata X3-2 systems.

For me, here is what is missing from Exadata or what I'd like to see:

  • A more appliance-centric approach where even the ASM and DB configuration is standard and factory setup (sorry, but OCS involvement/engagement would be minimized)
  • More work being pushed down to the storage level (more analytics, more parallel processing, more "transparent" indexing so I don't have to create and maintain)
  • Automatic data compression (can still provide better/advanced compression levels at cost)
  • Built-in Hadoop integration (storage nodes as data nodes and a dynamic compute as the named node?)
  • Integrated monitoring via an included OEM appliance, either as 2x1U servers in the rack or as external servers. You could argue you can just use an existing installation or build your own, but wouldn't it be nice to have this option? Quite frankly, I'm puzzled as to why Oracle has not come out with an OEM appliance yet, and I have suggested as much to some powers in the OEM team (I'm also wondering about a MySQL engineered system, ExaSQL :-))

Fujitsu Keynote
Moving on (or back) to the Fujitsu portion of the keynote: I found it most interesting, since their "Fujitsu Agricultural Cloud Service" does very similar work to that of my company (though not quite as full-featured a service, if I do say so myself). The service gathers data collected by farmers via various devices and runs analysis that helps provide information to improve yields. This is being done very cost-effectively and in near real time. Then there is project "Athena", the merging of hardware and software (OS and database, if I understood correctly) to bring forth a new processing model which will far surpass anything currently available. Leveraging knowledge and technology from the K supercomputer, with Liquid Loop Cooling (LLC), 512 GB per socket (32 TB per system), 4 CPUs with 2 TB each scaling/connecting in building-block fashion (up to 16 blocks?), and software on chip (database software also in silicon), the SPARC64 X was/will be born in 2013. Testing has shown a 2x increase in performance over IBM Power7, though no specifics were given (it was just a keynote). I do love how they showed real-world scenarios and business usage instead of just pure tech.

That has been my OpenWorld 2012 experience so far. Sunday down; next up, Monday to Thursday.

Sunday, September 30, 2012

Oracle OpenWorld 2012: Sunday

MySQL Connect

First up today was a visit to the MySQL keynote, featuring speakers from Twitter, PayPal, and Verizon Wireless. I've been interested in MySQL for some time but never really played with it myself. My company is seeking to investigate it further, since we've got over 50 instances of it running, but also to look into open source alternatives in general (mainly for cost reduction).

Very interesting what these large companies are doing with MySQL and how they are using the technology. Twitter uses it extensively because:

  • It is fast, even compared with NoSQL alternatives (depending on what you are doing with it of course)
  • It has very low latency
  • It scales well
  • It has a large ecosystem
  • The safety of InnoDB (i.e. it does not lose data)

It is not all roses, however, as there were some words of caution:

  • It is not yet optimized for SSD (which they are working on)
  • There is room for a configuration management tool
  • It is not a complete solution and should be used as a building block
  • It is not a purpose-built key-value (KV) store (so try other NoSQL if that is your true requirement)
  • It is not schema-less (both pro and con)
  • There is a need for better performance/response time metrics
  • There is a need for better monitoring

What they could share of their environment is also very interesting:

  • 25 traditional master-slaves
  • 3-100 machines with 6 DBAs and 1 developer
  • > 6 million queries per second (qps) w/400 million tweets per day (+ metadata)

The talk from Verizon Wireless was also interesting, as they apparently use MySQL for their intranet, with a customized landing page for each user (based on location, function, etc.) which the user can further customize. It is all highly performant (since that was a main criterion) and scalable. Their "Verizon Infrastructure as a Service (IaaS) Group" was inspirational: an internal group which provides IaaS to the rest of the company.

Tuesday, May 01, 2012

Oracle Internet Directory (OID) 11g: Part IV - OID Installation

This is the final post in my series on OID11g. I'll try to follow up with a few other posts, but essentially from here on out you are ready to go with OID11g. If you are interested in making your OID highly available using LDAP multi-master replication, then stay tuned for that follow-up post.

So the OID11g installation actually consists of three phases, namely installation, patching, and configuration. That is how I've broken up this post, which as a side effect, I think, makes it easier to follow. To provide some further clarity, some Fusion Middleware components are offered as full installers, but not all. You can get the distribution details for the components on MOS, or via the documentation on OTN. Unfortunately, OID falls into the case requiring a base software installation, followed by patching and subsequent configuration to complete the "installation". Hopefully Oracle will move towards full installers for all products, much like they've done for the database (and other products such as GoldenGate and so on).

Installation of

1. Edit your response file for silent installation. The items of interest are highlighted as shown below:

Response File Version=


#Set this to true if installation and configuration need to be done, all other required variables need to be provided. Variable "INSTALL AND CONFIGURE LATER TYPE" must be set to false if this is set to true as the variables are mutually exclusive

#Set this to true if only Software only installation need to be done. If this is set to true then variable "INSTALL AND CONFIGURE TYPE" must be set to false, since the variables are mutually exclusive.

#Write the name of the Oracle Home directory. The Oracle Home directory name may only contain alphanumeric , hyphen (-) , dot (.) and underscore (_) characters, and it must begin with an alphanumeric character.

#Write the complete path to a valid Middleware Home.

#Provide the My Oracle Support Username. If you wish to ignore Oracle Configuration Manager configuration provide empty string for user name.

#Provide the My Oracle Support Password

#Set this to true if you wish to decline the security updates. Setting this to true and providing empty string for My Oracle Support username will ignore the Oracle Configuration Manager configuration

#Set this to true if My Oracle Support Password is specified

#Provide the Proxy Host

#Provide the Proxy Port

#Provide the Proxy Username

#Provide the Proxy Password




2. Run the installation using OUI for OID, as the oracle user:

./runInstaller -silent -response /oracle/stage/rsp/oid11g-inst.rsp

Below is a sample execution run:

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB. Actual 18983 MB Passed
Checking swap space: must be greater than 500 MB. Actual 7724 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-01-24_04-05-56PM. Please wait ...[oracle@orads02 Disk1]$ Log: /u01/app/oraInventory/logs/install2012-01-24_04-05-56PM.log
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Reading response file..
Expected result: One of enterprise-5.4,enterprise-4,enterprise-5,redhat-5.4,redhat-4,redhat-5,SuSE-10
Actual Result: redhat-5
Check complete. The overall result of this check is: Passed

CertifiedVersions Check: Success.
Checking for binutils-; found binutils- Passed
Checking for compat-libstdc++-33-3.2.3-x86_64; found compat-libstdc++-33-3.2.3-61-x86_64. Passed
Checking for compat-libstdc++-33-3.2.3-i386; found compat-libstdc++-33-3.2.3-61-i386. Passed
Checking for elfutils-libelf-0.125; found elfutils-libelf-0.137-3.el5-i386. Passed
Checking for elfutils-libelf-devel-0.125; found elfutils-libelf-devel-0.137-3.el5-x86_64. Passed
Checking for gcc-4.1.1; found gcc-4.1.2-50.el5-x86_64. Passed
Checking for gcc-c++-4.1.1; found gcc-c++-4.1.2-50.el5-x86_64. Passed
Checking for glibc-2.5-12-x86_64; found glibc-2.5-58.el5_6.3-x86_64. Passed
Checking for glibc-2.5-12-i686; found glibc-2.5-58.el5_6.3-i686. Passed
Checking for glibc-common-2.5; found glibc-common-2.5-58.el5_6.3-x86_64. Passed
Checking for glibc-devel-2.5-x86_64; found glibc-devel-2.5-58.el5_6.3-x86_64. Passed
Checking for glibc-devel-2.5-12-i386; found glibc-devel-2.5-58.el5_6.3-i386. Passed
Checking for libaio-0.3.106-x86_64; found libaio-0.3.106-5-x86_64. Passed
Checking for libaio-0.3.106-i386; found libaio-0.3.106-5-i386. Passed
Checking for libaio-devel-0.3.106; found libaio-devel-0.3.106-5-i386. Passed
Checking for libgcc-4.1.1-x86_64; found libgcc-4.1.2-50.el5-x86_64. Passed
Checking for libgcc-4.1.1-i386; found libgcc-4.1.2-50.el5-i386. Passed
Checking for libstdc++-4.1.1-x86_64; found libstdc++-4.1.2-50.el5-x86_64. Passed
Checking for libstdc++-4.1.1-i386; found libstdc++-4.1.2-50.el5-i386. Passed
Checking for libstdc++-devel-4.1.1; found libstdc++-devel-4.1.2-50.el5-x86_64. Passed
Checking for make-3.81; found make-1:3.81-3.el5-x86_64. Passed
Checking for sysstat-7.0.0; found sysstat-7.0.2-3.el5_5.1-x86_64. Passed

Check complete. The overall result of this check is: Passed
Packages Check: Success.
Checking for VERSION=2.6.18; found VERSION=2.6.18-238.12.1.el5. Passed
Checking for hardnofiles=4096; found hardnofiles=131072. Passed
Checking for softnofiles=4096; found softnofiles=131072. Passed
Check complete. The overall result of this check is: Passed
Kernel Check: Success.
Expected result: ATLEAST=2.5-12
Actual Result: 2.5-58.el5_6.3
Check complete. The overall result of this check is: Passed
GLIBC Check: Success.
Expected result: 1024MB
Actual Result: 3948MB
Check complete. The overall result of this check is: Passed
TotalMemory Check: Success.
Expected result: LD_ASSUME_KERNEL environment variable should not be set in the environment.
Actual Result: Variable Not set.
Check complete. The overall result of this check is: Passed
Check Env Variable Check: Success.
Verifying data......
Copying Files...

Applying Oneoff Patch...
The installation of Oracle AS Common Toplevel Component, Oracle Identity Management 11g completed successfully.

Patching to

1. Edit your response file for silent patching. It's not much different from the installation, the items of interest are highlighted as shown below:


Response File Version=


#Provide the Oracle Home location. The location has to be the immediate child under the specified Middleware Home location. The Oracle Home directory name may only contain alphanumeric , hyphen (-) , dot (.) and underscore (_) characters, and it must begin with an alphanumeric character. The total length has to be less than or equal to 128 characters. The location has to be an empty directory or a valid IDM Oracle Home.

#Provide existing Middleware Home location.

#Provide the My Oracle Support Username. If you wish to ignore Oracle Configuration Manager configuration provide empty string for user name.

#Provide the My Oracle Support Password

#Set this to true if you wish to decline the security updates. Setting this to true and providing empty string for My Oracle Support username will ignore the Oracle Configuration Manager configuration

#Set this to true if My Oracle Support Password is specified

#Provide the Proxy Host

#Provide the Proxy Port

#Provide the Proxy Username

#Provide the Proxy Password

#Type String (URL format) Indicates the OCM Repeater URL which should be of the format [scheme[Http/Https]]://[repeater host]:[repeater port]





2. Run the patch application using OUI for OID, as the oracle user:

./runInstaller -silent -response /oracle/stage/rsp/oid11g-patch.rsp

Below is a sample execution run:

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB. Actual 18983 MB Passed
Checking swap space: must be greater than 512 MB. Actual 7406 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-01-24_04-27-11PM. Please wait ...[oracle@orads02 Disk1]$ Log: /u01/app/oraInventory/logs/install2012-01-24_04-27-11PM.log
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Reading response file..
Verifying data......
Copying Files...

Applying Oneoff Patch...
The installation of Oracle AS Common Toplevel Component on Oracle AS Common Toplevel Component home ,Oracle Identity Management 11g Patchset on Oracle Identity Management 11g home completed successfully.


Configuring OID with ODIP, ODSM and Fusion Middleware Control in a new WebLogic Domain

At this point you now need to configure your installation of OID11g. I went with the option of configuring OID with ODIP, ODSM and Fusion Middleware Control in a new WebLogic Domain. I wanted ODIP as an option to connect and synchronize to AD, ODSM and Fusion Middleware Control (FMC) for the GUI management and monitoring, and a new WebLogic Domain (for ODSM and FMC) since I don't have one that I would like to use currently. Please check the documentation for configuration using other options.

The steps to conduct the configuration are below. Note that I've had no success performing this configuration silently from the command line, so the GUI method is what is shown. Despite attempting many options, I suspect the GUI is the only choice thus far, unless I am missing something (not unlikely).

1. Start the configuration as the oracle user by running '$ORACLE_HOME/bin/config.sh':


Click 'Next' to continue to the next screen...

2. Enter the credentials for the new domain's user, along with the domain name. Click on 'Next' to continue.


3. Confirm and/or correct the locations for the WebLogic Server and Oracle Instance directories as well as specify an Oracle Instance Name. When completed click 'Next' to continue.


4. The next screen concerns the usual security notifications. I do not care for security updates so I simply continued.


5. Select Oracle Internet Directory and Oracle Directory Integration Platform. The Oracle Directory Services Manager and Fusion Middleware Control management components are automatically selected for this installation. Ensure no other components are selected and click 'Next' when completed to continue.


6. Select Auto Port Configuration to allow the installer to configure ports from a predetermined range. Click 'Next' when completed to continue.


7. We already used RCU to create and configure the OID schema, so here we just need to select 'Use Existing Schema', enter the connection details to the repository database in the form '<host>:<port>:<service name>', and enter the ODS schema password. Click 'Next' when completed to continue.


8. Next up is the OID information, i.e. the realm and administrator ('orcladmin') credentials. Click 'Next' to continue to the installation summary when completed.


9. Following the installation summary you will see the configuration progress screen.



10. If all goes well you will see the Installation Completion screen


Installation Verification

To verify a successful installation you should run the following commands:

1. Execute '$ORACLE_INSTANCE/bin/opmnctl status -l'

Processes in Instance: asinst_1
ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
oid1                             | oidldapd           |    8245 | Alive    | 1068702846 |   375296 |  67:57:58 | N/A
oid1                             | oidldapd           |    8229 | Alive    | 1068702845 |    95868 |  67:57:58 | N/A
oid1                             | oidmon             |    8214 | Alive    | 1068702844 |    83744 |  67:57:58 | LDAPS:3131,LDAP:3060
EMAGENT                          | EMAGENT            |    7402 | Alive    | 1068702843 |    63908 |  68:01:31 | N/A
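For a quick scripted check of the output above, a small helper of my own (not an Oracle tool) can be used: pipe the `opmnctl status -l` output into it and it succeeds only when no managed process reports a non-Alive status.

```shell
# Sketch: succeeds (exit 0) only if no process row reports a non-Alive status.
opmn_all_alive() {
  # grep -q succeeds if any status column shows Down/Init/Stop/NONE;
  # the leading ! inverts that, so 0 means everything is Alive.
  ! grep -qE '\| *(Down|Init|Stop|NONE) *\|'
}
```

Usage on a live instance: `$ORACLE_INSTANCE/bin/opmnctl status -l | opmn_all_alive && echo "all Alive"`.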

2. Execute the '$ORACLE_HOME/bin/ldapbind' command on the Oracle Internet Directory for non-SSL and SSL ports. Note that ORACLE_HOME must be set correctly (i.e. not the DB_HOME).

On Non-SSL ports:

$ORACLE_HOME/bin/ldapbind -h <host> -p <non-SSL port> -D cn=orcladmin -w <password>

On SSL ports:

$ORACLE_HOME/bin/ldapbind -h <host> -p <SSL port> -D cn=orcladmin -w <password> -U 1

Enabling WebLogic Startup

Every time an administrator runs the WebLogic startup script, he/she is prompted for a username and password. If the administrator wants WebLogic to start automatically on bootup or reboot, the username and password need to be recognized without prompting. To enable WLS startup without password prompting, create boot identity files under $DOMAIN_HOME/servers/AdminServer/security/ and $DOMAIN_HOME/servers/wls_ods1/security/ with entries:


After the initial startup, the password will be encrypted.
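The entries themselves did not survive above; for WebLogic the standard boot identity file is named boot.properties and holds the admin credentials in cleartext until first startup. As a sketch (the domain path, 'weblogic' username and password below are placeholders, use the credentials you supplied when creating the domain):

```shell
# Hypothetical example -- substitute your real domain home and credentials.
DOMAIN_HOME=/tmp/example_domain
for srv in AdminServer wls_ods1; do
  mkdir -p "$DOMAIN_HOME/servers/$srv/security"
  # boot.properties is read at server start; WLS re-writes it encrypted.
  cat > "$DOMAIN_HOME/servers/$srv/security/boot.properties" <<'EOF'
username=weblogic
password=welcome1
EOF
done
```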


So now you have your first OID instance up and functional. All that is left is some configuration and tuning after a period of operation. I will end the series on OID11g here, but will try to follow up with further entries on setting up LDAP multi-master replication (MMR), backup/recovery, and migration from 10g. I would like to point out that you should enable anonymous binds, which are disabled by default. Otherwise, you will receive the error:

"Configuration exception: Could not check for the Oracle Schema: TNS-04409: Directory Service Error"

when attempting to use DBCA to register your database with OID. Anonymous binds can be enabled in two ways:

Using OEM11g Fusion Middleware Control
a. Navigate to 'Identity and Access' -> oid1
b. Click on 'Oracle Internet Directory' and select 'Administration' -> 'Server Properties'
c. Switch 'Anonymous Bind' from 'Disallow except for Read Access on the root DSE' to 'Allows'
d. Click 'Apply'

Using Command-line
ldapmodify -D cn=orcladmin -q -p 3060 -h <host> -f [ldifFile]

LDIF File:
dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
changetype: modify
replace: orclAnonymousBindsFlag
orclAnonymousBindsFlag: 1
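As a convenience, the LDIF above can be written to a file and applied in one go. The /tmp path and <host> below are placeholders; 3060 is the non-SSL port opmnctl reported earlier.

```shell
# Write the anonymous-binds LDIF shown above to a file.
cat > /tmp/anon_bind.ldif <<'EOF'
dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
changetype: modify
replace: orclAnonymousBindsFlag
orclAnonymousBindsFlag: 1
EOF

# Then apply it on the OID host (-q prompts for the orcladmin password):
#   $ORACLE_HOME/bin/ldapmodify -D cn=orcladmin -q -p 3060 -h <host> -f /tmp/anon_bind.ldif
```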