Universal Splunk forwarder as sidecar not showing internal Splunk logs

I have implemented a sidecar container to forward my main application's logs to Splunk, using the Splunk universal forwarder image. After deployment, both the main application and the forwarder appear to be up and running, but no logs arrive in the Splunk index I specified. While troubleshooting, I could not find splunkd.log or any other Splunk internal logs under the /var/log path. Can someone please help me enable these Splunk internal logs?

1 Answer

To enable and troubleshoot the Splunk Universal Forwarder internal logs, follow these steps:

  1. Verify Forwarder Configuration: Ensure that the Splunk Universal Forwarder is actually configured to monitor your application's log path and to send events to the specified Splunk index. Check inputs.conf and outputs.conf (and any props.conf overrides) under $SPLUNK_HOME/etc/system/local/ or the relevant app directory; a minimal CLI sketch is shown after this list.

  2. Locate Internal Logs: The forwarder's internal logs (splunkd.log, metrics.log, etc.) are written to $SPLUNK_HOME/var/log/splunk, not to the system /var/log directory. If you're not seeing logs there, make sure the environment variable $SPLUNK_HOME is set correctly; in the universal forwarder image it defaults to /opt/splunkforwarder. A quick check is shown after this list.

  3. Check Forwarder Status: Use the Splunk CLI to check the status of the forwarder. Run the following command inside your forwarder container:

     ```bash
     $SPLUNK_HOME/bin/splunk status
     ```

  4. Enable Detailed Logging: If the internal logs don't show enough detail, raise the logging level of the relevant splunkd components. splunkd logging levels are defined in $SPLUNK_HOME/etc/log.cfg; put overrides in $SPLUNK_HOME/etc/log-local.cfg so they survive upgrades. For a forwarding problem, the file-monitoring and TCP output components are the interesting ones:

     ```ini
     [splunkd]
     category.TcpOutputProc=DEBUG
     category.WatchedFile=DEBUG
     ```

     After modifying the configuration, restart the Splunk forwarder to apply changes:

     ```bash
     $SPLUNK_HOME/bin/splunk restart
     ```

  5. Check Permissions: Ensure that the Splunk forwarder user can write to its own log directory, $SPLUNK_HOME/var/log/splunk (in the official image the process runs as the splunk user). Check and, if needed, fix ownership and permissions:

     ```bash
     chown -R splunk:splunk $SPLUNK_HOME/var/log/splunk
     chmod -R 755 $SPLUNK_HOME/var/log/splunk
     ```

  6. Inspect the Sidecar Configuration: Since you're running the forwarder in a sidecar container, ensure that the application's log directory sits on a shared volume that is mounted into both containers at the expected path, with no conflicting volume mounts. Verify your Docker Compose or Kubernetes manifest; a quick kubectl check is sketched after this list.

  7. Validate Network Configuration: Confirm that the forwarder can reach the Splunk indexer on its receiving port (9997 by default). Check for firewall rules, network policies, or DNS issues between the forwarder container and the indexer; connection failures are logged by the TCP output processor in splunkd.log (see the example after this list).

  8. Query the Forwarder's Management Port: The universal forwarder does not include Splunk Web, but splunkd exposes a REST management interface on port 8089 (HTTPS) that can provide additional diagnostic information; query it with a REST client such as curl rather than a browser (see the example after this list).

  9. Examine Deployment Logs: If you're using orchestration tools like Kubernetes, check the pod events and the forwarder container's stdout for any errors related to the Splunk forwarder deployment, such as a license that was never accepted or a missing admin password (see the commands after this list).

  10. Consult Splunk Documentation and Community: Refer to the Splunk Universal Forwarder documentation for additional configuration and troubleshooting steps. The Splunk community forums can also be a valuable resource.
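
For step 1, here is a minimal sketch of wiring up the input and output from the Splunk CLI rather than editing the .conf files by hand. The index name my_app_index, the shared log path /var/log/app, the sourcetype app_logs, and the indexer address splunk-indexer.example.com:9997 are all placeholders for your own values, and the target index must already exist on the indexer side:

```bash
# Run inside the forwarder container; $SPLUNK_HOME is /opt/splunkforwarder by default.
# Point the forwarder at the indexer's receiving port (writes outputs.conf).
$SPLUNK_HOME/bin/splunk add forward-server splunk-indexer.example.com:9997 -auth admin:'<admin-password>'

# Monitor the shared application log directory and route events to the target index (writes inputs.conf).
$SPLUNK_HOME/bin/splunk add monitor /var/log/app -index my_app_index -sourcetype app_logs

# Confirm what the forwarder thinks it is monitoring and where it is forwarding.
$SPLUNK_HOME/bin/splunk list monitor
$SPLUNK_HOME/bin/splunk list forward-server
```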
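
For step 2, a quick way to confirm the internal logs exist and to watch them, assuming the default install path /opt/splunkforwarder:

```bash
# Internal logs live under $SPLUNK_HOME/var/log/splunk, not under the system /var/log.
ls -l /opt/splunkforwarder/var/log/splunk/

# splunkd.log is the main internal log; metrics.log shows forwarding throughput.
tail -n 50 /opt/splunkforwarder/var/log/splunk/splunkd.log
grep -iE "error|warn" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -n 20
```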
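
For step 6, a sketch of verifying that the log directory is really shared between the two containers, assuming a pod named my-app-pod with containers named app and splunk-uf and a shared volume mounted at /var/log/app (all of these names are placeholders):

```bash
# The same files the application writes must be visible from inside the forwarder container.
kubectl exec my-app-pod -c app -- ls -l /var/log/app
kubectl exec my-app-pod -c splunk-uf -- ls -l /var/log/app

# Double-check the volume mounts declared for each container in the pod spec.
kubectl get pod my-app-pod -o yaml | grep -A5 -i volumemounts
```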
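
For step 7, connection problems usually surface in splunkd.log under the TCP output processor; a sketch of what to look for (the indexer hostname and port are placeholders, and nc may not be present in a minimal forwarder image):

```bash
# Look for output/connection errors in the forwarder's internal log.
grep -iE "TcpOutputProc|Connection failure|blocked" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -n 20

# "Active forwards" means an open connection to the indexer; "Configured but inactive"
# usually points to a network, DNS, or certificate problem.
$SPLUNK_HOME/bin/splunk list forward-server

# Basic reachability test of the indexer's receiving port, if nc is available.
nc -zv splunk-indexer.example.com 9997
```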
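
For step 8, a sketch of querying the forwarder's REST management port with curl; it assumes admin credentials were set when the container started, and the inputstatus endpoint shown is a commonly used diagnostic for file monitoring:

```bash
# Basic identity/health check against splunkd's management port.
curl -k -u admin:'<admin-password>' https://localhost:8089/services/server/info

# Which files the tailing processor has picked up and how far it has read them.
curl -k -u admin:'<admin-password>' https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus
```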
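
For step 9, the container's stdout and the pod events often reveal startup problems (for example, a license that was never accepted or a missing SPLUNK_PASSWORD in the official splunk/universalforwarder image) before splunkd.log is even written. Pod and container names are placeholders:

```bash
# Forwarder container stdout/stderr (entrypoint and provisioning output in the official image).
kubectl logs my-app-pod -c splunk-uf

# Pod-level events: image pull errors, failed volume mounts, restarts, OOM kills.
kubectl describe pod my-app-pod
```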

By following these steps, you should be able to enable and locate the Splunk internal logs, diagnose the issue, and ensure that logs are being forwarded to your specified Splunk index.
