Known Issues for WebLogic Management
These known issues have been identified in WebLogic Management.
Pre-General Availability: 2024-10-11
The following legal notice applies to Oracle pre-GA releases. For copyright and other applicable notices, see Oracle Legal Notices.
Pre-General Availability Draft Documentation Notice
This documentation is in pre-General Availability status and is intended for demonstration and preliminary use only. It may not be specific to the hardware on which you are using the software. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to this documentation and will not be responsible for any loss, costs, or damages incurred due to the use of this documentation.
- Patching
  - API operation returns different 409 conflict error messages
  - Server state doesn't change during a domain restart
  - Patching tries to use Node Manager when start and stop operations use scripts
  - Domain sharing middleware displays Needs Attention when other domain is patched
  - Patch rollback fails with conflicting patches error
  - Patching and lifecycle operations fail when both WebLogic and Node Manager are configured to use OCI Secrets
API operation returns different 409 conflict error messages
- Details

  An API operation can return a 409 - Conflict error message for validation, for example, when installing the latest patches:

  `"data": { "code": "Conflict", "message": "Patch readiness status must be OK for domain <domain-ocid> Current status is ERROR" }, "status": "409 Conflict"`

  Rerunning the operation results in a different error:

  `"data": { "code": "Conflict", "message": "Resource <domain-ocid> is currently being modified." } "status": "409 Conflict"`
- Workaround

  Wait 15 to 30 seconds before making the second call to the API.
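  If you call the API from a script, you can build this wait into a retry. The following is a minimal Python sketch, assuming the OCI Python SDK (`oci`) and that the call raises `oci.exceptions.ServiceError` on a 409 response; the `apply_latest_patches` callable named in the usage comment is a hypothetical placeholder for your own SDK or REST call.

  ```python
  import time

  import oci  # OCI Python SDK; assumes it is installed and configured


  def call_with_retry(operation, retries=3, delay_seconds=30):
      """Retry a callable that can fail with a transient 409 Conflict."""
      for attempt in range(retries):
          try:
              return operation()
          except oci.exceptions.ServiceError as err:
              # Retry only on 409 - Conflict; re-raise any other error.
              if err.status != 409 or attempt == retries - 1:
                  raise
              time.sleep(delay_seconds)  # wait before making the next call


  # Usage (apply_latest_patches is a hypothetical placeholder):
  # result = call_with_retry(apply_latest_patches, retries=3, delay_seconds=30)
  ```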
Server state doesn't change during a domain restart
- Details

  When you restart a domain, the server states should change from Stopping to Restarting to Running. However, the WebLogic Management plugin does not report intermediate server states, so the server state displays Running throughout the restart operation.
- Workaround

  Monitor the server status in the work request for the domain restart operation.
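  If you want to script this monitoring, the sketch below polls a work request with the OCI Python SDK. It is a minimal sketch that assumes the generic `oci.work_requests.WorkRequestClient` can read the restart work request; depending on the service, you may need the WebLogic Management service client's own work request operations instead.

  ```python
  import time

  import oci  # OCI Python SDK; assumes a configured ~/.oci/config profile


  def wait_for_work_request(work_request_id, poll_seconds=30):
      """Poll a work request until it reaches a terminal state."""
      config = oci.config.from_file()
      client = oci.work_requests.WorkRequestClient(config)
      while True:
          work_request = client.get_work_request(work_request_id).data
          print(work_request_id, work_request.status)
          if work_request.status not in ("ACCEPTED", "IN_PROGRESS", "CANCELING"):
              return work_request
          time.sleep(poll_seconds)


  # Usage: pass the OCID of the work request created by the domain restart.
  # wait_for_work_request("<work-request-ocid>")
  ```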
Patching tries to use Node Manager when start and stop operations use scripts
- Details

  If the Use scripts to start and stop servers option is selected in the Domain Settings, when you try to apply the latest patches, WebLogic Management tries to use Node Manager for the stop and start operations.
- Workaround

  This issue is harmless. However, if you have removed the Node Manager files, patching fails. You can restore patching functionality by restoring the Node Manager files you deleted.
Domain sharing middleware displays Needs Attention when other domain is patched
- Details

  On a managed instance, if a domain shares its middleware with another domain and you apply patches to one of the domains, the servers in both domains are restarted after the middleware is patched. However, the domain that was not patched shows a Needs Attention state. This happens because Node Manager is not started for that domain, so WebLogic Management cannot get the state of the domain's servers.
- Workaround

  Start Node Manager on the managed instance and run a scan to update the server and domain states.
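  Node Manager can be started from WLST as well as from the domain scripts. The following is a minimal WLST (Jython) sketch; the Node Manager home, listen address, and port are placeholders for your own installation. After Node Manager is running, run the scan from the console or API.

  ```python
  # WLST sketch: run with <MW_HOME>/oracle_common/common/bin/wlst.sh
  # NodeManagerHome, ListenAddress, and ListenPort are placeholders.
  startNodeManager(verbose='true',
                   NodeManagerHome='/u01/data/domains/<domain_name>/nodemanager',
                   ListenAddress='localhost',
                   ListenPort='5556')
  ```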
Server location removed during managed instance scan
- Details

  If a domain has no listen address configured for its servers and those servers are spread across nodes, when you scan one node (managed instance), the servers on that node are associated with the correct managed instance when you view the domain. However, the servers on the other nodes are erased and their managed instances are no longer associated.
- Workaround

  We recommend that you configure a listen address for the servers in your domains to avoid this issue.
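  If you prefer to script the change, a listen address can be set with WLST. The following is a minimal online WLST (Jython) sketch; the administrator credentials, administration URL, server name, and host name are placeholders, and each managed server should get the address of the node it runs on.

  ```python
  # WLST sketch: run with <MW_HOME>/oracle_common/common/bin/wlst.sh
  # Credentials, URL, server name, and host below are placeholders.
  connect('weblogic', '<password>', 't3://<admin-host>:7001')
  edit()
  startEdit()
  cd('/Servers/<server_name>')
  cmo.setListenAddress('<node-hostname>')  # host where this server runs
  save()
  activate(block='true')
  disconnect()
  ```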
Patch rollback fails with conflicting patches error
- Details

  When WebLogic Management applies patches, it rolls back any patches that are known to conflict with the patches being applied. On rare occasions, OPatch removes the patch backup contents from `MW_HOME/.patch_storage` before fully removing the patch. This results in an error that is displayed in the Work Request messages, for example:

  `Request for managed instance with ID ocid1.instance.oc1.iad.<instance-ocid> failed. Response of request: [ Failed to rollback conflicting patches on middleware MW_HOME for domain DOMAIN_NAME. Error is Failed to rollback patch [patch-id-number] with error UtilSession failed: Prerequisite check "CheckRollbackable" failed.. ].`
- Workaround

  To recover, do one of the following:

  - Contact Support with this error message to obtain the contents to be replaced.
  - Get the missing patch contents from another middleware home: `MW_HOME/.patch_storage/<patch id>_<date>`.
  - Install the middleware on another instance, apply the patch, and get the contents of `MW_HOME/.patch_storage/<patch id>_<date>`.

  After you have the contents from `.patch_storage`, move them into `MW_HOME/.patch_storage` on the instance on which the failure occurred (see the copy sketch after this list). Then, do one of the following:

  - Run Apply Latest Patches again.
  - Stop all servers and roll back the patch using `MW_HOME/OPatch/opatch nrollback -id <patch id>`.
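  Moving the missing backup between middleware homes is an ordinary directory copy. The following is a minimal Python sketch, assuming the healthy home's `.patch_storage` directory is staged or mounted on the failing instance; the paths and the patch directory name are placeholders.

  ```python
  import shutil
  from pathlib import Path

  # Placeholders: substitute your own middleware homes and patch directory name.
  SOURCE_MW_HOME = Path("/u01/app/source_middleware")  # home that still has the backup
  TARGET_MW_HOME = Path("/u01/app/middleware")         # home where the rollback failed
  PATCH_DIR = "<patch id>_<date>"                      # directory under .patch_storage

  src = SOURCE_MW_HOME / ".patch_storage" / PATCH_DIR
  dst = TARGET_MW_HOME / ".patch_storage" / PATCH_DIR

  # Copy the patch backup contents onto the instance where the rollback failed.
  shutil.copytree(src, dst, dirs_exist_ok=True)
  print("Copied", src, "->", dst)
  ```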
Patching and lifecycle operations fail when both WebLogic and Node Manager are configured to use OCI Secrets
- Details

  If you have a WebLogic Server domain without `boot.properties` and you have set the domain to use OCI secrets for both WebLogic and Node Manager, domain lifecycle operations and the patching restart operation fail.

- Workaround

  Use OCI secrets only for WebLogic. Keep the Node Manager credentials set to use the domain configuration. Because the Node Manager credentials are always available in `config.xml`, the WebLogic Management plugin will read the credentials from there.
Service-initiated scan after lifecycle operations gives incorrect results
- Details

  If you have a WebLogic Server domain without `boot.properties` and you have set the domain to use OCI secrets for WebLogic or Node Manager, the credentials of the secret OCIDs are sent with scan, patch, and other lifecycle operations (start, stop, restart, or rollback).

  After a lifecycle operation is completed, the service automatically runs a scan operation to get the correct state of all servers, but it does not send the credentials of the secret OCIDs. Instead, the scan operation reverts to using domain credentials. If the domain credentials are not accessible, the scan results are incorrect.

  Note

  If the domain has `boot.properties` or the credentials are part of ServerStart, this issue is not seen.

- Workaround

  Initiate a scan after a lifecycle operation to get the correct state of servers.