Live Upgrade (LU) is a great feature of Solaris, and a good option for testing a new environment before releasing it to production. But, like any software, LU has its issues…
As far as I know, LU depends on the "biosdev" utility, and that is a weak point in the LU procedure. I tried to LU a machine that boots from SAN (via an HBA), and that was a problem. Here is the output from the "biosdev" utility:

0x80 /pci@0,0/pci8086,27d0@1c/pci8086,32c@0/pci1077,100@2/fp@0,0/disk@w5006048449af62a7,23
0x81 /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
0x82 /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0

That's correct, and already a victory! But I have another disc mapped to this machine, and I want to use it to LU the server. However, the "lucreate" command gives me an error, because that disc is not reported by the BIOS as a boot disc. That is correct too, because the QLogic setup utility permits configuring just one LUN as the boot device. So I need to put the other disc into that list, so I can LU onto it. (I found this trick here.)

# mv /sbin/biosdev /sbin/biosdev-old
# cat > /sbin/biosdev << 'EOF'
#!/bin/sh
cat << 'END'
0x80 /pci@0,0/pci8086,27d0@1c/pci8086,32c@0/pci1077,100@2/fp@0,0/disk@w5006048449af62a7,23
0x81 /pci@0,0/pci8086,27d0@1c/pci8086,32c@0/pci1077,100@2/fp@0,0/disk@w5006048449af62a7,8
0x82 /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
0x83 /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
END
EOF
# chmod +x /sbin/biosdev
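Before re-running "lucreate", it is worth confirming that the replacement script prints the faked table. A minimal sanity check, sketched against a temporary copy so it can be tried anywhere (the scratch path is just for illustration):

```shell
# Recreate the fake biosdev in a scratch directory and verify its output.
tmp=$(mktemp -d)
cat > "$tmp/biosdev" << 'EOF'
#!/bin/sh
cat << 'END'
0x80 /pci@0,0/pci8086,27d0@1c/pci8086,32c@0/pci1077,100@2/fp@0,0/disk@w5006048449af62a7,23
0x81 /pci@0,0/pci8086,27d0@1c/pci8086,32c@0/pci1077,100@2/fp@0,0/disk@w5006048449af62a7,8
0x82 /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
0x83 /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
END
EOF
chmod +x "$tmp/biosdev"
"$tmp/biosdev" | grep -c '^0x'   # counts 4: all four BIOS entries present
"$tmp/biosdev" | grep '0x81'     # the extra SAN LUN we want lucreate to accept
rm -rf "$tmp"
```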

After that I could issue the "lucreate" command:

# lucreate -c 'sol10u3' -n 'sol10u4' -m /:/dev/dsk/c0t5006048449AF62A7d8s0:ufs

Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named .
Creating initial configuration for primary boot environment .
The device  is not a root device for any boot environment.
PBE configuration successful: PBE name  PBE Boot Device .
Comparing source boot environment  file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.
The device  is not a root device for any boot environment.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Checking for GRUB menu on boot environment .
The boot environment  does not contain the GRUB menu.
Creating file systems on boot environment .
Creating  file system for  on .
Mounting file systems for boot environment .
Calculating required sizes of file systems for boot environment .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point .
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment .
Creating compare database for file system .
Updating compare databases on boot environment .
Making boot environment  bootable.
Updating bootenv.rc on ABE .
Generating partition and slice information for ABE 
Population of boot environment  successful.
Creation of boot environment  successful.

And finally, the upgrade:

# luupgrade -u -n sol10u4 -s /cdrom/sol_10_807_x86/

Install media is CD/DVD. .
Waiting for CD/DVD media  ...
Copying failsafe multiboot from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is 
Mounting miniroot at 
Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains  version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment .
Package information successfully updated on boot environment .
Adding operating system patches to the BE .
The operating system patch installation is complete.
ABE boot partition backing deleted.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file  on boot
environment  contains a log of the upgrade operation.
INFORMATION: The file  on boot
environment  contains a log of cleanup operations required.
WARNING: <1> packages failed to install properly on boot environment .
INFORMATION: The file  on
boot environment  contains a list of packages that failed to
upgrade or install properly.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment . Before you activate boot
environment , determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment  is partially complete.
Installing failsafe
Failsafe install is complete.

Let's take a look at the status:

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol10u3                    yes      yes    yes       no     -
sol10u4                    yes      no     no        yes    -

The last step in the LU procedure is to activate the new environment for boot:

# luactivate sol10u4

WARNING: <1> packages failed to install properly on boot environment .
INFORMATION:  on boot
environment  contains a list of packages that failed to upgrade
or install properly. Review the file before you reboot the system to
determine if any additional system maintenance is required.

Generating partition and slice information for ABE 
Boot menu exists.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Do *not* change *hard* disk order in the BIOS.

2. Boot from the Solaris Install CD or Network and bring the system to
Single User mode.

3. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c0t5006048449AF62A7d35s0 /mnt

4. Run  utility with out any arguments from the Parent boot
environment root slice, as shown below:

     /mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
GRUB menu is on device: .
Filesystem type for menu device: .
Activation of boot environment  successful.

WARNING: In a normal LU procedure we would not need the following actions, because the boot disc stays the same and LU just adds a new GRUB menu entry to boot the new environment.
If you look closely at the messages above, you will see that GRUB was configured to boot from a disc that is not in the correct order. That is because of our fake configuration: the server BIOS is still booting from the old disc, with a GRUB entry pointing at a disc at the wrong address (0x81). In the real scenario, the new disc, holding the sol10u4 environment, will be configured as the boot device in the BIOS, at address 0x80. So we need to install a boot sector on that disc and configure its /boot/grub/menu.lst file:

# /sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 \
    /dev/rdsk/c0t5006048449AF62A7d8s0

After that, we need to mount the new disc (/dev/dsk/c0t5006048449AF62A7d8s0) and create a menu.lst file on it, with an entry like:

#----- sol10u4 - ADDED BY LIVE UPGRADE - DO NOT EDIT  -----

title sol10u4
root (hd0,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive

title sol10u4 failsafe
root (hd0,0,a)
kernel /boot/multiboot kernel/unix -s
module /boot/x86.miniroot-safe

#----- sol10u4 -------------- END LIVE UPGRADE ------------
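The mount-and-edit step can be sketched in commands like the following (device and mount point are from this machine's layout; treat them as an example, not a recipe):

```shell
# Mount the root slice of the new disc and append the GRUB entry to its menu.
mount -F ufs /dev/dsk/c0t5006048449AF62A7d8s0 /mnt
cat >> /mnt/boot/grub/menu.lst << 'EOF'
title sol10u4
root (hd0,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
EOF
umount /mnt
```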

Now we can configure the BIOS to boot from the other disc, and we are done.
That's all.
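One loose end: the hacked /sbin/biosdev is still in place. Presumably you will want the real utility back before any future LU run; assuming the rename from the start of this post, something like:

```shell
# Put the original biosdev back (it was saved as biosdev-old earlier).
mv /sbin/biosdev-old /sbin/biosdev
```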