To document how Serilog and the Elastic Stack are installed, and how integration between the two is achieved
- How to integrate and configure Serilog/Elastic Stack in an application
- How can the sent data be controlled?
- Can the same application send information to multiple indexes?
- How will the logs from the various applications be displayed?
- How detailed are the exceptions?
- Compare against Exceptionless (confirm user is present and stack trace)
- Installation Guide for Elastic Stack in Self Hosted Environment
- Pre-requisites
- Configuring the Elastic Stack
- Ingesting Data
- Alerting with Elastic Stack
- Dashboards and Canvas
- Backups
- Results
- References
- Serilog - Elastic Stack integration
- Configuring Serilog
- Offline environment
- Every docker command was run through PuTTY with the user `appuser`
- Every docker command was prefixed with the `sudo` command; from this point onward this is implicit
- Disk Space - 1.5 GB minimum
- RAM - 8 GB minimum
Source: Hardware Prerequisites
Get Docker Compose from its GitHub releases page:
```shell
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
Source: Install Docker Compose
- Create a folder named `ElasticStack` in `/srv`
- Move your docker compose file and configs to that folder
- Give `777` permissions recursively to every file/folder inside the created folder
- Edit the docker-compose file and add `xpack.security.enabled=true` to the environment variables for `elasticsearch`
- Run the command: `docker-compose up -d`
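For reference, a minimal `docker-compose.yml` matching these steps might look like the following. This is a sketch, not the original project's file: the image versions (taken from the build noted later in this document), container names, the single-node discovery setting, and the volume name are all assumptions to adapt to your environment.

```yaml
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: elasticsearch
    environment:
      # Single-node cluster; security enabled as described above
      - discovery.type=single-node
      - xpack.security.enabled=true
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.2
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
volumes:
  esdata:
```

The container names matter: the `docker exec -it elasticsearch bash` and `docker exec -it kibana bash` commands in the next steps assume these exact names.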
- Enter the Elasticsearch container with the command `docker exec -it elasticsearch bash`
- Navigate to the folder `./bin`
- Run `./elasticsearch-setup-passwords interactive`
- When prompted, insert the passwords to be used by the default system users
- Leave the docker container by using `exit`
- Enter the Kibana container with the command `docker exec -it kibana bash`
- Navigate to the folder `./config`
- Edit the file `kibana.yml` with the command `vi kibana.yml`
- Add the following lines:
  `elasticsearch.username: "kibana_system"`
  `elasticsearch.password: "the_password_entered_for_the_user_kibana_system"`
- Leave the docker container by using `exit`
- Restart the Kibana container by using `docker restart kibana`
- When deploying to a production environment, edit the `docker-compose.yml` file and add the following to `kibana > environment`: `SERVER_PUBLICBASEURL: URL_FOR_KIBANA:PORT`
Note
- The publicly available URL that end-users access Kibana at.
- Must include the protocol, hostname, port (if different than the defaults for http and https, 80 and 443 respectively), and the server.basePath (if configured).
- This setting cannot end in a slash (/).
Source: Configure Kibana
- Elasticsearch is on port 9200
- Kibana is configured on port 5601
- The default user is `elastic` and the password is the one configured during the configuration of the Elastic Stack - Kibana communication
- On the left side menu, under `Management`, access `Stack Management`
- Go to `Security` > `Roles` and create a role with index privileges
- Go to `Security` > `Users` and create a user with the previously created role
- Using a management account, click the colored letter in the top left corner and select `Manage Spaces`
- Create a space for the group
- Go to `Security` > `Roles` and create a group that is assigned specific indexes and spaces
- Set the appropriate privileges
- Go to `Security` > `Users` and create a user that is assigned to the created role
- To validate the login use: `curl -u <user_here> 'http://SERVER:PORT/_xpack/security/_authenticate?pretty'`
- When prompted, input the password for the provided user
In order to start using the Elastic Stack we need data from our applications. Data can be added by creating an index for our project that will act as a repository for all data related to that project.
- Go to `Management` > `Stack Management` > `Index Patterns`
- Click `Create Index Pattern`
- Fill out the `Name` field to filter the results on the right table until you find the desired project
- Select the field that will serve as the `timestamp` (default: `@timestamp`)
- Click `Create Index Pattern`

After this step, all collected data related to the created index will be available in the `Discover` and `Dashboard` sections for analysis.
Note: Conventions regarding `Indexes`, `Environment` and `Release` should be discussed with the Team prior to implementation in order to define a definitive convention.
- Build used: 7.9.2
- Warning: Requires a gold license (30-day free trial available)
- Edit the docker-compose file and add `xpack.security.authc.api_key.enabled=true` to the elasticsearch environment variables
- Enter the Kibana container with the command `docker exec -it kibana bash`
- Navigate to `./config`
- Edit the `kibana.yml` file and add: `xpack.encryptedSavedObjects.encryptionKey: "[32 character key]"`
- To generate this key, it is best to use a password generator
- Take note of this key
- Go to `Stack Management`
- On the left bar search for and click `Alerts and Actions`
- Define your e-mail properties as if you were configuring an e-mail client
- Go to `Stack Management`
- On the left bar search for and click `Alerts and Actions`
- Create your alert by filling out the form with information relative to the action/alert you wish to create, as well as the communication path
Note - These steps require the step Ingesting Data to be completed first.
- Go to the `Dashboard` section
- Click `Create visualization`
- On the top left corner select the desired index
- Build your widget by dragging one or more available fields to the Horizontal/Vertical Axis in order to populate the graph/table with data
- Go to the `Canvas` section
- Click `Create workpad`
- On the top left corner click `Add element` and add the desired element
- On the right pane select `Data`, then select `Elasticsearch SQL` or `Elasticsearch Documents`
- Select the desired `index` and `fields`
- Again on the right pane select `Display`
- Build your element by selecting one or more available fields in order to populate the graph/table with data
Note: Exporting dashboards or canvases to files is limited by licensing
- CSV and JSON > Free/Basic
- PDF and PNG > Gold or above
Source: Elastic Stack subscriptions
As mentioned in Snapshot and Restore, the Elastic Stack does not have a direct export mechanism. Instead, it supports a Snapshot mechanism where all files and containers are exported to an off-site location.
The supported locations are:
- AWS S3
- Google Cloud Storage (GCS)
- Hadoop Distributed File System (HDFS)
- Microsoft Azure
In order to make a backup for our use case, it would be necessary to back up and restore all Docker-related volumes defined at install time. See Backup, restore, or migrate data volumes.
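Following the pattern from the Docker documentation linked above, backing up one of those volumes could be sketched as below. The volume name `elasticstack_esdata` is an assumption (compose prefixes volume names with the project folder name); check `docker volume ls` for the real names in your installation.

```shell
# List the volumes created by the compose project
docker volume ls

# Archive the contents of one volume to a tar file on the host,
# using a throwaway container that mounts both the volume and the
# current working directory.
docker run --rm \
  -v elasticstack_esdata:/volume \
  -v "$(pwd)":/backup \
  ubuntu tar cvf /backup/esdata-backup.tar /volume
```

Restoring is the reverse: mount an empty volume and the backup directory, and extract the tar into the volume before starting the stack.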
- Although it's possible to create an alert, it's not possible to create an alert for specific events like `Fatal Error` warnings or when a given type of exception happens
- While Elastic says it is possible to make user logins and encryption work together, the documentation on that subject is very lacking, which resulted in only being able to configure the system with alerts or with login, but not both at the same time
- SSL certificates and an encryption key can be used if you are not using login credentials. If you want to use credentials, TLS is needed
- https://codingfundas.com/how-to-install-elasticsearch-7-with-kibana-using-docker-compose/index.html
- Setting up Elasticsearch and Kibana on Docker with X-Pack security enabled
- Alerting and Actions
- Email actions
This project aims to be a starting point on how to implement Serilog in combination with the Elastic Stack in a solution. Thanks to this, it's possible to log events that occur in the system via Serilog and pass these logs to Elasticsearch and Kibana for later analysis.
Mandatory

- `Serilog`/`Serilog.AspNetCore` - Base Serilog package. Install the appropriate NuGet for the type of application you are developing
- `Serilog.Sinks.Elasticsearch` - NuGet that adds support for exporting information to the Elastic Stack

Recommended

- `Serilog.Sinks.File` - Export to file
- `Serilog.Sinks.Console` - Export to console

Optional

Enrichers - These packages are add-ons that complement Serilog's logging capabilities, adding new functions like Thread/Process ID logging, among others.

- `Serilog.Enrichers.AssemblyName` - Adds the assembly name
- `Serilog.Enrichers.ClientInfo` - Adds the user agent
- `Serilog.Enrichers.Demystifier` - Better presentation of async requests and stack traces
- `Serilog.Enrichers.GlobalLogContext` - Adds properties to all log events
- `Serilog.Enrichers.Process` - Adds the process ID and Environment Name
- `Serilog.Exceptions` - Adds exception details and custom properties
- `SerilogWeb.Classic` - Adds details related to HTTP requests
- `SerilogWeb.Classic.MVC` - Adds details related to MVC logic, like controller and action names, to the log
- Collected by Enrichers
  - Date
  - Error type (exception type)
  - Message
  - HTTP Method
  - Machine Name
  - User agent
  - Stack trace - Only shows up when unhandled exceptions happen
- Manually Injected
  - `IP` - Injected
  - `Project Name` - Injected
  - `Operating System` - Injected (OS Name, Version and Build)
  - `User` - Injected
  - `Domain` - Injected
- Cannot be collected
  - `Browser` - Although we can collect the `User Agent`, we could not get a specific browser version
Below `app.UseStaticFiles();` insert `app.UseSerilogRequestLogging();`. This will tell your software to use Serilog as its logging service. (You may need to add Serilog to your dependencies.)
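For context, a sketch of where that call might sit inside `Startup.Configure`; the surrounding middleware is an assumption for illustration, not taken from the original project:

```csharp
using Microsoft.AspNetCore.Builder;
using Serilog;

public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles();

    // Placing the request logging middleware after UseStaticFiles means
    // requests for static assets are not logged.
    app.UseSerilogRequestLogging();

    app.UseRouting();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}
```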
- Inside `public static void Main(string[] args)` add:
```csharp
try
{
    // Tries to start the application
    CreateHostBuilder(args).Build().Run();
}
catch (Exception exc)
{
    // Passing the exception to Log.Fatal records the full exception,
    // including its stack trace, not just the message text
    Log.Fatal(exc, "App failed to start");
}
finally
{
    // Forces the system to register all pending logs before closing
    Log.CloseAndFlush();
}
```
- In `public static IHostBuilder CreateHostBuilder(string[] args)`, delete the contents and add:
```csharp
public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args)
    .UseSerilog((context, configuration) =>
    {
        configuration.Enrich.FromLogContext()
            //User Information
            .Enrich.WithProperty("Timestamp", DateTime.UtcNow)
            .Enrich.WithMachineName()
            .Enrich.WithUserName()
            .Enrich.WithEnvironmentUserName()
            //Client Information
            .Enrich.WithProperty("OS", (string)Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Windows NT\CurrentVersion", "ProductName", null))
            .Enrich.WithProperty("OSId", Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion", "ReleaseId", "").ToString())
            .Enrich.WithProperty("OSBuild", Environment.OSVersion.Version)
            .Enrich.WithProperty("IPAddress", Dns.GetHostAddresses(Dns.GetHostName()))
            .Enrich.WithProperty("Architecture", System.Runtime.InteropServices.RuntimeInformation.ProcessArchitecture)
            .Enrich.WithProperty("DomainName", IPGlobalProperties.GetIPGlobalProperties().DomainName)
            .Enrich.WithClientAgent()
            //Application Information
            .Enrich.WithProperty("ApplicationName", Assembly.GetExecutingAssembly().GetName().Name)
            .Enrich.WithEnvironmentName()
            .Enrich.WithHttpRequestUserAgent()
            .Enrich.WithProcessId()
            //Request/Action Information
            .Enrich.WithMvcActionName()
            .Enrich.WithMvcControllerName()
            .Enrich.WithHttpRequestType()
            .Enrich.WithHttpRequestUrl()
            //Error Information
            .Enrich.WithExceptionDetails()
            .Enrich.WithDemystifiedStackTraces()
            //Writing Locations
            .WriteTo.Console()
            .WriteTo.File("Logs\\Log.txt")
            .WriteTo.File("Logs\\Log.json")
            .WriteTo.Elasticsearch(new Serilog.Sinks.Elasticsearch.ElasticsearchSinkOptions(new Uri(context.Configuration["ElasticConfiguration:Uri"])) //Gets the URL for the Elastic export from the appsettings
            {
                //Indexing format for log analysis
                IndexFormat = $"{context.Configuration["ApplicationName"]}-logs-{context.HostingEnvironment.EnvironmentName?.ToLower().Replace(".", "-")}-{DateTime.UtcNow:yyyy-MM}",
                AutoRegisterTemplate = true,
                NumberOfReplicas = 1,
                NumberOfShards = 2,
            })
            .Enrich.WithProperty("Environment", context.HostingEnvironment.EnvironmentName)
            .ReadFrom.Configuration(context.Configuration); //Read the configuration from the appsettings
    })
    .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); });
```
Notes

- When creating a custom property avoid using spaces, as this will prevent queries from working properly (ex: use `ApplicationName` instead of `Application Name`)
- Any line starting with `.Enrich` or `.WriteTo` can be removed if not necessary, as these collect optional information or define alternative export paths
- Replace the base logging configurations with:
```json
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Fatal",
        "System": "Fatal"
      }
    }
  },
```
Note: These verbosity levels can and should be adjusted according to the amount of information you wish to receive in Kibana's log board. See Log Levels for more info on verbosity levels.
- Add:
"ElasticConfiguration": {
"Uri": "http://localhost:9200"
},
Note: When deploying, the URL should be replaced with the URL of the docker server running the Elastic Stack
Verbose
- Verbose is the noisiest level, rarely (if ever) enabled for a production app.
Debug
- Debug is used for internal system events that are not necessarily observable from the outside, but useful when determining how something happened.
Information
- Information events describe things happening in the system that correspond to its responsibilities and functions. Generally these are the observable actions the system can perform.
Warning
- When service is degraded, endangered, or may be behaving outside of its expected parameters, Warning level events are used.
Error
- When functionality is unavailable or expectations broken, an Error event is used.
Fatal
- The most critical level, Fatal events demand immediate attention.
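As an illustration of the levels above, each one maps to a method on Serilog's static `Log` class. This sketch is not from the original project; the events and parameter names are made up:

```csharp
using Serilog;

// Illustrative only: one call per verbosity level.
void LogAtEachLevel(string orderId, Exception exception)
{
    Log.Verbose("Raw payload received for order {OrderId}", orderId);
    Log.Debug("Cache miss while looking up order {OrderId}", orderId);
    Log.Information("Order {OrderId} created", orderId);
    Log.Warning("Order {OrderId} is taking longer than expected", orderId);
    Log.Error(exception, "Failed to process order {OrderId}", orderId);
    Log.Fatal(exception, "Unrecoverable failure while processing order {OrderId}", orderId);
}
```

With the `MinimumLevel` overrides shown earlier, only events at or above the configured level reach the sinks.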
```csharp
using Serilog;

public void LogService(Exception exc)
{
    Log.Error($"User: {HttpContext.User.Identity.Name}, triggered: {exc.Message}");
    Log.CloseAndFlush();
}
```
- Apart from the data gathered by the enrichers (which don't always work), it's possible to add additional information to the logs by using the `.Enrich.WithProperty("name", value)` tag. This allows us to create middleware methods that inject data into the logs, like the logged-in user name, which is usually stored in the HttpContext - and the HttpContext is not running when an application is started.
- Another issue that was detected was that some of the enrichers only populate when certain actions happen, which is the case of the stack trace, which only seems to appear when an unhandled error happens.
- Some enrichers may also behave like this; in that case you might want to replace their function with a middleware enricher or a native function that returns the value you are looking for, and pass it to the logger as a custom property.
- Logs include the stack trace of a given error and are complemented by data collected by the enrichers to provide a detailed list of information that can be used as a starting point for understanding an issue.
- As mentioned in "Controlling Data", it was detected that some of the enrichers only populate when certain actions happen, which is the case of the stack trace, which only seems to appear when an unhandled error happens.
- When the application itself is configured, it's possible to define an index format that will identify the application in Kibana's dashboard. This allows us to filter the list and get the logs just for the application in question.
- In short: yes, it's possible.
- Although the solution cannot be considered clean code, it's possible to push data to multiple indexes from one application. For this to work, when coding the application itself you just have to duplicate the lines responsible for exporting to the Elastic Stack. These are the only ones that change, while the rest of the configuration remains the same.
In Program.cs
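A hedged sketch of what that duplication might look like inside the `.UseSerilog(...)` configuration shown earlier; the second index suffix (`-audit-`) is an illustrative assumption, not the original project's name:

```csharp
// Sketch: register the Elasticsearch sink twice, once per target index.
// Only the IndexFormat differs between the two registrations.
configuration
    .WriteTo.Elasticsearch(new Serilog.Sinks.Elasticsearch.ElasticsearchSinkOptions(new Uri(context.Configuration["ElasticConfiguration:Uri"]))
    {
        // First index: regular application logs
        IndexFormat = $"{context.Configuration["ApplicationName"]}-logs-{DateTime.UtcNow:yyyy-MM}",
        AutoRegisterTemplate = true,
    })
    .WriteTo.Elasticsearch(new Serilog.Sinks.Elasticsearch.ElasticsearchSinkOptions(new Uri(context.Configuration["ElasticConfiguration:Uri"]))
    {
        // Second index: e.g. audit events ("-audit-" is illustrative)
        IndexFormat = $"{context.Configuration["ApplicationName"]}-audit-{DateTime.UtcNow:yyyy-MM}",
        AutoRegisterTemplate = true,
    });
```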
In appsettings.json
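If the sinks are instead configured via `ReadFrom.Configuration` (the `Serilog.Settings.Configuration` package), the same duplication can be expressed in the `WriteTo` array. A sketch, with `myapp` and the index names as illustrative placeholders:

```json
{
  "Serilog": {
    "WriteTo": [
      {
        "Name": "Elasticsearch",
        "Args": {
          "nodeUris": "http://localhost:9200",
          "indexFormat": "myapp-logs-{0:yyyy-MM}"
        }
      },
      {
        "Name": "Elasticsearch",
        "Args": {
          "nodeUris": "http://localhost:9200",
          "indexFormat": "myapp-audit-{0:yyyy-MM}"
        }
      }
    ]
  }
}
```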
- Serilog doesn't have any mechanism for sending the logs that accumulated while offline. Although not ideal, Serilog supports local logging, so the data is not lost.
- Serilog provides a way to write to a local file if the connection to the server fails when trying to send the logs.
- This is achieved through the `EmitEventFailure` property with the `EmitEventFailureHandling.WriteToFailureSink` option, together with configuring the `FailureSink`.
Source: Serilog - Handling Errors
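A minimal sketch of that configuration, following the pattern in the sink's documentation; the server URI and the file path are assumptions:

```csharp
using Serilog;
using Serilog.Formatting.Json;
using Serilog.Sinks.Elasticsearch;
using Serilog.Sinks.File;

// Sketch: if Elasticsearch is unreachable, failed events are redirected
// to a local file sink instead of being dropped.
var options = new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    EmitEventFailure = EmitEventFailureHandling.WriteToFailureSink,
    FailureSink = new FileSink("Logs/ElasticFailures.txt", new JsonFormatter(), null)
};

Log.Logger = new LoggerConfiguration()
    .WriteTo.Elasticsearch(options)
    .CreateLogger();
```

The failure file then holds the events that would otherwise be lost, but as noted below, replaying them into Elasticsearch later is not something the sink does for you.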
- Serilog provides a way to configure multiple endpoints to write data to, and will try the next endpoint in case the first one fails to respond
Source: Configure the Sink
- Even though the logs are not lost, when Elasticsearch goes online again every pending log needs to be inserted, and since Serilog doesn't provide any mechanism for this, it would have to be developed.
- Logging into Elasticsearch using Serilog and viewing logs in Kibana | .NET Core Tutorial by Nick Chapsas
- Serilog - Flexible, structured events - log file convenience
- Serilog Best Practices by Ben Foster
- Serilog Do's and Don'ts by Eric St-Georges
- List of official Serilog sinks
- How to enrich a log with Serilog enrichers
- Creating a User Name Middleware for enrichment of Asp .Net Core logs