Compare commits


24 commits

Author SHA1 Message Date
71a6f505e1
Merge pull request #10 from wneessen/update-SECURITY-md
Update project names in SECURITY.md
2024-03-21 20:27:12 +01:00
eab102f166
Update project names in SECURITY.md
Project names in the SECURITY.md file have been updated to reflect the correct project: js-mailer has been replaced with logranger. The email address and URL for reporting security issues have been revised accordingly.
2024-03-21 20:26:20 +01:00
166878714d
Merge pull request #9 from wneessen/readability
Refactor variable names for improved code readability
2024-03-21 20:24:23 +01:00
80e30c6bda
Refactor variable names for improved code readability
The changes involve refactoring and cleaning up variable names, making them more descriptive and meaningful to enhance the readability of the code. The accuracy of variable names in conveying their usage and purpose has been greatly improved. The changes span multiple files, touching crucial components like the server, rulesets, connection, and configuration handling.
2024-03-21 20:22:33 +01:00
c86532d5d9
Merge pull request #8 from wneessen/fix_reuse
Add SPDX license headers to workflows and Dependabot config
2024-03-21 16:35:44 +01:00
f0e0b94307
Add SPDX license headers to workflows and Dependabot config
Added SPDX license headers to the GitHub workflows and the Dependabot configuration file, specifying the license as MIT. Also, minor formatting changes have been made to the dependency review workflow file.
2024-03-21 16:35:03 +01:00
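For reference, the REUSE-style SPDX header this commit describes — matching the headers visible in the file diffs further down this page — looks like:

```yaml
# SPDX-FileCopyrightText: 2023 Winni Neessen <wn@neessen.dev>
#
# SPDX-License-Identifier: MIT
```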
7b6edf1c31
Merge pull request #3 from wneessen/dependabot/github_actions/actions/checkout-4.1.2
Bump actions/checkout from 2.7.0 to 4.1.2
2024-03-21 16:33:27 +01:00
dependabot[bot]
9a7db0fb90
Bump actions/checkout from 2.7.0 to 4.1.2
Bumps [actions/checkout](https://github.com/actions/checkout) from 2.7.0 to 4.1.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2.7.0...9bb56186c3b09b4f86b1c65136769dd318469633)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-21 15:33:13 +00:00
fc1ca00262
Merge pull request #4 from wneessen/dependabot/github_actions/github/codeql-action-3.24.8
Bump github/codeql-action from 2.24.8 to 3.24.8
2024-03-21 16:33:08 +01:00
f54f539549
Merge pull request #5 from wneessen/dependabot/github_actions/actions/setup-go-5.0.0
Bump actions/setup-go from 3.5.0 to 5.0.0
2024-03-21 16:32:47 +01:00
0fb013853b
Merge pull request #6 from wneessen/dependabot/github_actions/fsfe/reuse-action-3.0.0
Bump fsfe/reuse-action from 1.3.0 to 3.0.0
2024-03-21 16:32:37 +01:00
77c67b4aeb
Merge pull request #7 from wneessen/dependabot/github_actions/golangci/golangci-lint-action-4.0.0
Bump golangci/golangci-lint-action from 3.7.0 to 4.0.0
2024-03-21 16:32:26 +01:00
dependabot[bot]
3f4a9c23cc
Bump golangci/golangci-lint-action from 3.7.0 to 4.0.0
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 3.7.0 to 4.0.0.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](3a91952989...3cfe3a4abb)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-21 15:31:34 +00:00
dependabot[bot]
4967c82d92
Bump fsfe/reuse-action from 1.3.0 to 3.0.0
Bumps [fsfe/reuse-action](https://github.com/fsfe/reuse-action) from 1.3.0 to 3.0.0.
- [Release notes](https://github.com/fsfe/reuse-action/releases)
- [Commits](28cf8f33bc...a46482ca36)

---
updated-dependencies:
- dependency-name: fsfe/reuse-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-21 15:31:31 +00:00
dependabot[bot]
08a58e25ad
Bump actions/setup-go from 3.5.0 to 5.0.0
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3.5.0 to 5.0.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](6edd4406fa...0c52d547c9)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-21 15:31:29 +00:00
dependabot[bot]
8d6a02c386
Bump github/codeql-action from 2.24.8 to 3.24.8
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 2.24.8 to 3.24.8.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v2.24.8...05963f47d870e2cb19a537396c1f668a348c7d8f)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-21 15:31:25 +00:00
4ab61a625e
Merge pull request #2 from step-security-bot/stepsecurity_remediation_1711034495
[StepSecurity] Apply security best practices
2024-03-21 16:30:56 +01:00
StepSecurity Bot
5897a4ece0
[StepSecurity] Apply security best practices
Signed-off-by: StepSecurity Bot <bot@stepsecurity.io>
2024-03-21 15:21:39 +00:00
94bc56f032
Merge pull request #1 from wneessen/fix_workflows
Implement security improvements and workflow updates
2024-03-21 16:15:21 +01:00
5c41bef4dc
Remove CodeQL
2024-03-21 16:13:37 +01:00
df58859a4f
Update language matrix in codeql workflow
The language matrix in the .github/workflows/codeql.yml file has been updated to only include 'go'. This change removes the 'javascript-typescript' option to focus solely on Go code analysis and enhance the efficiency of the workflow process.
2024-03-21 16:10:38 +01:00
38661b29ae
Disable Autobuild and add new build commands in workflow
The Autobuild command in the .github/workflows/codeql.yml file has been commented out due to possible build failures. Instead, a new run command has been added to manually build the application using Go. This change allows for more control over, and reliability of, the build process.
2024-03-21 16:08:38 +01:00
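The commit message does not quote the replacement step itself; a minimal sketch of what such a manual Go build step typically looks like in a CodeQL workflow (the `go build ./...` command is an assumption for illustration):

```yaml
# Replaces the commented-out Autobuild step with an explicit Go build.
- name: Build application
  run: go build ./...
```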
ddc62a9a04
Add CC0-1.0 license and update workflow files
A new file, LICENSES/CC0-1.0.txt, has been created to provide the Creative Commons Zero v1.0 Universal license for the project. Additionally, SPDX headers specifying the MIT license and copyright details have been added to each of the GitHub workflow files, improving the clarity and compliance of the project's licensing.
2024-03-21 16:02:13 +01:00
42e89bc2bb
Implement security improvements and workflow updates
Added SECURITY.md with details for vulnerability reporting and encryption. Introduced new workflows for dependency review, Scorecard supply-chain security, and CodeQL analysis. Made amendments to docker-publish.yml for a better Docker build and publishing process. These enhancements are aimed at improving the security stance and the efficiency of the CI/CD workflows.
2024-03-21 15:47:46 +01:00
19 changed files with 619 additions and 221 deletions

20
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,20 @@
# SPDX-FileCopyrightText: 2023 Winni Neessen <wn@neessen.dev>
#
# SPDX-License-Identifier: MIT
version: 2
updates:
- package-ecosystem: github-actions
directory: /
schedule:
interval: daily
- package-ecosystem: docker
directory: /
schedule:
interval: daily
- package-ecosystem: gomod
directory: /
schedule:
interval: daily

82
.github/workflows/codeql.yml vendored Normal file

@@ -0,0 +1,82 @@
# SPDX-FileCopyrightText: 2023 Winni Neessen <wn@neessen.dev>
#
# SPDX-License-Identifier: MIT
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: ["main"]
pull_request:
# The branches below must be a subset of the branches above
branches: ["main"]
schedule:
- cron: "0 0 * * 1"
permissions:
contents: read
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: ["go"]
# CodeQL supports [ $supported-codeql-languages ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Harden Runner
uses: step-security/harden-runner@63c24ba6bd7ba022e95695ff85de572c04a18142 # v2.7.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@05963f47d870e2cb19a537396c1f668a348c7d8f # v3.24.8
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@05963f47d870e2cb19a537396c1f668a348c7d8f # v3.24.8
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@05963f47d870e2cb19a537396c1f668a348c7d8f # v3.24.8
with:
category: "/language:${{matrix.language}}"

31
.github/workflows/dependency-review.yml vendored Normal file

@@ -0,0 +1,31 @@
# SPDX-FileCopyrightText: 2023 Winni Neessen <wn@neessen.dev>
#
# SPDX-License-Identifier: MIT
# Dependency Review Action
#
# This Action will scan dependency manifest files that change as part of a Pull Request,
# surfacing known-vulnerable versions of the packages declared or updated in the PR.
# Once installed, if the workflow run is marked as required,
# PRs introducing known-vulnerable packages will be blocked from merging.
#
# Source repository: https://github.com/actions/dependency-review-action
name: 'Dependency Review'
on: [pull_request]
permissions:
contents: read
jobs:
dependency-review:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@63c24ba6bd7ba022e95695ff85de572c04a18142 # v2.7.0
with:
egress-policy: audit
- name: 'Checkout Repository'
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
- name: 'Dependency Review'
uses: actions/dependency-review-action@0fa40c3c10055986a88de3baa0d6ec17c5a894b3 # v4.2.3

View file

@@ -2,7 +2,7 @@
#
# SPDX-License-Identifier: MIT
name: Publish docker image
name: Docker build and publish
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
@@ -11,7 +11,7 @@ name: Publish docker image
on:
schedule:
- cron: '26 12 * * *'
- cron: '32 18 * * *'
push:
branches: [ "main" ]
# Publish semver tags as releases.
@@ -26,6 +26,9 @@ env:
IMAGE_NAME: ${{ github.repository }}
permissions:
contents: read
jobs:
build:
@@ -38,28 +41,31 @@ jobs:
id-token: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@63c24ba6bd7ba022e95695ff85de572c04a18142 # v2.7.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
# Install the cosign tool except on PR
# https://github.com/sigstore/cosign-installer
- name: Install cosign
if: github.event_name != 'pull_request'
uses: sigstore/cosign-installer@6e04d228eb30da1757ee4e1dd75a0ec73a653e06 #v3.1.1
with:
cosign-release: 'v2.1.1'
uses: sigstore/cosign-installer@e1523de7571e31dbe865fd2e80c5c7c23ae71eb4 #v3.4.0
# Set up BuildKit Docker container builder to be able to build
# multi-platform images and export cache
# https://github.com/docker/setup-buildx-action
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226 # v3.0.0
uses: docker/setup-buildx-action@2b51285047da1547ffb1b2203d8be4c0af6b1f20 # v3.2.0
# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
if: github.event_name != 'pull_request'
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20 # v3.1.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -69,7 +75,7 @@ jobs:
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@96383f45573cb7f253c731d3b3ab81c87ef81934 # v5.0.0
uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81 # v5.5.1
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
@@ -77,7 +83,7 @@
# https://github.com/docker/build-push-action
- name: Build and push Docker image
id: build-and-push
uses: docker/build-push-action@0565240e2d4ab88bba5387d719585280857ece09 # v5.0.0
uses: docker/build-push-action@2cdde995de11925a030ce8070c3d77a52ffcf1c0 # v5.3.0
with:
context: .
push: ${{ github.event_name != 'pull_request' }}

View file

@@ -19,12 +19,17 @@ jobs:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
- name: Harden Runner
uses: step-security/harden-runner@63c24ba6bd7ba022e95695ff85de572c04a18142 # v2.7.0
with:
egress-policy: audit
- uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0
with:
go-version: '1.21'
- uses: actions/checkout@v3
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
uses: golangci/golangci-lint-action@3cfe3a4abbb849e10058ce4af15d205b6da42804 # v4.0.0
with:
# Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
version: latest

View file

@@ -10,6 +10,11 @@ jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Harden Runner
uses: step-security/harden-runner@63c24ba6bd7ba022e95695ff85de572c04a18142 # v2.7.0
with:
egress-policy: audit
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v1
uses: fsfe/reuse-action@a46482ca367aef4454a87620aa37c2be4b2f8106 # v3.0.0

81
.github/workflows/scorecards.yml vendored Normal file

@@ -0,0 +1,81 @@
# SPDX-FileCopyrightText: 2022 Winni Neessen <winni@neessen.dev>
#
# SPDX-License-Identifier: CC0-1.0
# This workflow uses actions that are not certified by GitHub. They are provided
# by a third-party and are governed by separate terms of service, privacy
# policy, and support documentation.
name: Scorecard supply-chain security
on:
# For Branch-Protection check. Only the default branch is supported. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection
branch_protection_rule:
# To guarantee Maintained check is occasionally updated. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
schedule:
- cron: '34 15 * * 4'
push:
branches: [ "main" ]
# Declare default permissions as read only.
permissions: read-all
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
# Needed to publish results and get a badge (see publish_results below).
id-token: write
# Uncomment the permissions below if installing in a private repository.
# contents: read
# actions: read
steps:
- name: Harden Runner
uses: step-security/harden-runner@63c24ba6bd7ba022e95695ff85de572c04a18142 # v2.7.0
with:
egress-policy: audit
- name: "Checkout code"
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@0864cf19026789058feabb7e87baa5f140aac736 # v2.3.1
with:
results_file: results.sarif
results_format: sarif
# (Optional) "write" PAT token. Uncomment the `repo_token` line below if:
# - you want to enable the Branch-Protection check on a *public* repository, or
# - you are installing Scorecard on a *private* repository
# To create the PAT, follow the steps in https://github.com/ossf/scorecard-action#authentication-with-pat.
# repo_token: ${{ secrets.SCORECARD_TOKEN }}
# Public repositories:
# - Publish results to OpenSSF REST API for easy access by consumers
# - Allows the repository to include the Scorecard badge.
# - See https://github.com/ossf/scorecard-action#publishing-results.
# For private repositories:
# - `publish_results` will always be set to `false`, regardless
# of the value entered here.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@5d5d22a31266ced268874388b861e4b58bb5c2f3 # v4.3.1
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@05963f47d870e2cb19a537396c1f668a348c7d8f # v3.24.8
with:
sarif_file: results.sarif

View file

@@ -9,16 +9,24 @@ on:
branches:
- main
permissions:
contents: read
jobs:
build:
name: Build
runs-on: ubuntu-latest
permissions: read-all
steps:
- uses: actions/checkout@v2
- name: Harden Runner
uses: step-security/harden-runner@63c24ba6bd7ba022e95695ff85de572c04a18142 # v2.7.0
with:
egress-policy: audit
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
with:
fetch-depth: 0 # Shallow clones should be disabled for a better relevancy of analysis
- uses: sonarsource/sonarqube-scan-action@master
- uses: sonarsource/sonarqube-scan-action@9ad16418d1dd6d28912bc0047ee387e90181ce1c # master
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

View file

@@ -3,7 +3,7 @@
# SPDX-License-Identifier: MIT
## Build first
FROM golang:alpine AS builder
FROM golang:alpine@sha256:0466223b8544fb7d4ff04748acc4d75a608234bf4e79563bff208d2060c0dd79 AS builder
RUN mkdir /builddir
ADD cmd/ /builddir/cmd/
ADD template/ /builddir/template

121
LICENSES/CC0-1.0.txt Normal file

@@ -0,0 +1,121 @@
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.

38
SECURITY.md Normal file

@@ -0,0 +1,38 @@
<!--
SPDX-FileCopyrightText: 2021-2024 Winni Neessen <wn@neessen.dev>
SPDX-License-Identifier: CC0-1.0
-->
# Security Policy
## Reporting a Vulnerability
To report (possible) security issues in logranger, please either send a mail to
[security@neessen.dev](mailto:security@neessen.dev) or use GitHub's
[private reporting feature](https://github.com/wneessen/logranger/security/advisories/new).
Reports are always welcome. Even if you are not 100% certain that a specific issue you found
counts as a security issue, we'd love to hear the details, so we can figure out together if
the issue in question needs to be addressed.
Typically, you will receive an answer within a day or even within a few hours.
## Encryption
You can send OpenPGP/GPG encrypted mails to the [security@neessen.dev](mailto:security@neessen.dev) address.
OpenPGP/GPG public key:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
xjMEZfdSjxYJKwYBBAHaRw8BAQdA8YoxV0iaLJxVUkBlpC+FQyOiCvWPcnnk
O8rsfRHT22bNK3NlY3VyaXR5QG5lZXNzZW4uZGV2IDxzZWN1cml0eUBuZWVz
c2VuLmRldj7CjAQQFgoAPgWCZfdSjwQLCQcICZAajWCli0ncDgMVCAoEFgAC
AQIZAQKbAwIeARYhBB6X6h8oUi9vvjcMFxqNYKWLSdwOAACHrQEAmfT2HNXF
x1W0z6E6PiuoHDU6DzZ1MC6TZkFfFoC3jJ0BAJZdZnf6xFkVtEAbxNIVpIkI
zjVxgI7gefYDXbqzQx4PzjgEZfdSjxIKKwYBBAGXVQEFAQEHQBdOGYxMLrCy
+kypzTe9jgaEOjob2VVsZ2UV2K9MGKYYAwEIB8J4BBgWCgAqBYJl91KPCZAa
jWCli0ncDgKbDBYhBB6X6h8oUi9vvjcMFxqNYKWLSdwOAABIFAEA3YglATpF
YrJxatxHb+yI6WdhhJTA2TaF2bxBl10d/xEA/R5CKbMe3kj647gjiQ1YXQUh
dM5AKh9kcJn6FPLEoKEM
=nm5C
-----END PGP PUBLIC KEY BLOCK-----
```

View file

@@ -20,49 +20,49 @@ const (
)
func main() {
l := slog.New(slog.NewJSONHandler(os.Stdout, nil)).With(slog.String("context", "logranger"))
cp := "logranger.toml"
cpe := os.Getenv("LOGRANGER_CONFIG")
if cpe != "" {
cp = cpe
logger := slog.New(slog.NewJSONHandler(os.Stdout, nil)).With(slog.String("context", "logranger"))
confPath := "logranger.toml"
confPathEnv := os.Getenv("LOGRANGER_CONFIG")
if confPathEnv != "" {
confPath = confPathEnv
}
p := filepath.Dir(cp)
f := filepath.Base(cp)
c, err := logranger.NewConfig(p, f)
path := filepath.Dir(confPath)
file := filepath.Base(confPath)
config, err := logranger.NewConfig(path, file)
if err != nil {
l.Error("failed to read/parse config", LogErrKey, err)
logger.Error("failed to read/parse config", LogErrKey, err)
os.Exit(1)
}
s, err := logranger.New(c)
server, err := logranger.New(config)
if err != nil {
l.Error("failed to create new server", LogErrKey, err)
logger.Error("failed to create new server", LogErrKey, err)
os.Exit(1)
}
go func() {
if err = s.Run(); err != nil {
l.Error("failed to start logranger", LogErrKey, err)
if err = server.Run(); err != nil {
logger.Error("failed to start logranger", LogErrKey, err)
os.Exit(1)
}
}()
sc := make(chan os.Signal, 1)
signal.Notify(sc)
for rc := range sc {
if rc == syscall.SIGKILL || rc == syscall.SIGABRT || rc == syscall.SIGINT || rc == syscall.SIGTERM {
l.Warn("received signal. shutting down server", slog.String("signal", rc.String()))
// s.Stop()
l.Info("server gracefully shut down")
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan)
for recvSig := range signalChan {
if recvSig == syscall.SIGKILL || recvSig == syscall.SIGABRT || recvSig == syscall.SIGINT || recvSig == syscall.SIGTERM {
logger.Warn("received signal. shutting down server", slog.String("signal", recvSig.String()))
// server.Stop()
logger.Info("server gracefully shut down")
os.Exit(0)
}
if rc == syscall.SIGHUP {
l.Info(`received signal`,
if recvSig == syscall.SIGHUP {
logger.Info(`received signal`,
slog.String("signal", "SIGHUP"),
slog.String("action", "reloading config/ruleset"))
if err = s.ReloadConfig(p, f); err != nil {
l.Error("failed to reload config", LogErrKey, err)
if err = server.ReloadConfig(path, file); err != nil {
logger.Error("failed to reload config", LogErrKey, err)
}
}
}

View file

@@ -56,25 +56,25 @@ type Config struct {
// configuration values. It takes in the file path and file name of the configuration
// file as parameters. It returns a pointer to the Config object and an error if
// there was a problem reading or loading the configuration.
func NewConfig(p, f string) (*Config, error) {
co := Config{}
_, err := os.Stat(fmt.Sprintf("%s/%s", p, f))
func NewConfig(path, file string) (*Config, error) {
config := Config{}
_, err := os.Stat(fmt.Sprintf("%s/%s", path, file))
if err != nil {
return &co, fmt.Errorf("failed to read config: %w", err)
return &config, fmt.Errorf("failed to read config: %w", err)
}
if err := fig.Load(&co, fig.Dirs(p), fig.File(f), fig.UseEnv("logranger")); err != nil {
return &co, fmt.Errorf("failed to load config: %w", err)
if err := fig.Load(&config, fig.Dirs(path), fig.File(file), fig.UseEnv("logranger")); err != nil {
return &config, fmt.Errorf("failed to load config: %w", err)
}
switch {
case strings.EqualFold(co.Parser.Type, "rfc3164"):
co.internal.ParserType = rfc3164.Type
case strings.EqualFold(co.Parser.Type, "rfc5424"):
co.internal.ParserType = rfc5424.Type
case strings.EqualFold(config.Parser.Type, "rfc3164"):
config.internal.ParserType = rfc3164.Type
case strings.EqualFold(config.Parser.Type, "rfc5424"):
config.internal.ParserType = rfc5424.Type
default:
return nil, fmt.Errorf("unknown parser type: %s", co.Parser.Type)
return nil, fmt.Errorf("unknown parser type: %s", config.Parser.Type)
}
return &co, nil
return &config, nil
}

View file

@@ -22,14 +22,14 @@ type Connection struct {
// NewConnection creates a new Connection object with the provided net.Conn.
// The Connection object holds a reference to the provided net.Conn, along with an ID string,
// bufio.Reader, and bufio.Writer. It returns a pointer to the created Connection object.
func NewConnection(netConn net.Conn) *Connection {
	connection := &Connection{
		conn: netConn,
		id:   NewConnectionID(),
		rb:   bufio.NewReader(netConn),
		wb:   bufio.NewWriter(netConn),
	}
	return connection
}
// NewConnectionID generates a new unique connection ID using a random number generator


@@ -26,42 +26,43 @@ const (
// NewListener initializes and returns a net.Listener based on the provided
// configuration. It takes a pointer to a Config struct as a parameter.
// Returns the net.Listener and an error if any occurred during initialization.
func NewListener(config *Config) (net.Listener, error) {
	var listener net.Listener
	var listenerErr error
	switch config.Listener.Type {
	case ListenerUnix:
		resolveUnixAddr, err := net.ResolveUnixAddr("unix", config.Listener.ListenerUnix.Path)
		if err != nil {
			return nil, fmt.Errorf("failed to resolve UNIX listener socket: %w", err)
		}
		listener, listenerErr = net.Listen("unix", resolveUnixAddr.String())
	case ListenerTCP:
		listenAddr := net.JoinHostPort(config.Listener.ListenerTCP.Addr,
			fmt.Sprintf("%d", config.Listener.ListenerTCP.Port))
		listener, listenerErr = net.Listen("tcp", listenAddr)
	case ListenerTLS:
		if config.Listener.ListenerTLS.CertPath == "" || config.Listener.ListenerTLS.KeyPath == "" {
			return nil, ErrCertConfigEmpty
		}
		cert, err := tls.LoadX509KeyPair(config.Listener.ListenerTLS.CertPath, config.Listener.ListenerTLS.KeyPath)
		if err != nil {
			return nil, fmt.Errorf("failed to load X509 certificate: %w", err)
		}
		listenAddr := net.JoinHostPort(config.Listener.ListenerTLS.Addr, fmt.Sprintf("%d", config.Listener.ListenerTLS.Port))
		listenConf := &tls.Config{Certificates: []tls.Certificate{cert}}
		listener, listenerErr = tls.Listen("tcp", listenAddr, listenConf)
	default:
		return nil, fmt.Errorf("failed to initialize listener: unknown listener type in config")
	}
	if listenerErr != nil {
		return nil, fmt.Errorf("failed to initialize listener: %w", listenerErr)
	}
	return listener, nil
}
// UnmarshalString satisfies the fig.StringUnmarshaler interface for the ListenerType type
func (l *ListenerType) UnmarshalString(value string) error {
	switch strings.ToLower(value) {
	case "unix":
		*l = ListenerUnix
	case "tcp":
		*l = ListenerTCP
	case "tls":
		*l = ListenerTLS
	default:
		return fmt.Errorf("unknown listener type: %s", value)
	}
	return nil
}


@@ -34,29 +34,29 @@ type File struct {
//
// If any of the required configuration parameters are missing or invalid, an error
// is returned.
func (f *File) Config(configMap map[string]any) error {
	if configMap["file"] == nil {
		return nil
	}
	config, ok := configMap["file"].(map[string]any)
	if !ok {
		return fmt.Errorf("missing configuration for file action")
	}
	f.Enabled = true

	filePath, ok := config["output_filepath"].(string)
	if !ok || filePath == "" {
		return fmt.Errorf("no output_filepath configured for file action")
	}
	f.FilePath = filePath

	outputTpl, ok := config["output_template"].(string)
	if !ok || outputTpl == "" {
		return fmt.Errorf("no output_template configured for file action")
	}
	f.OutputTemplate = outputTpl

	if hasOverwrite, ok := config["overwrite"].(bool); ok && hasOverwrite {
		f.Overwrite = true
	}
@@ -65,34 +65,34 @@ func (f *File) Config(cm map[string]any) error {
// Process satisfies the plugins.Action interface for the File type
// It takes in the log message (logMessage) and the regexp match groups (matchGroup).
func (f *File) Process(logMessage parsesyslog.LogMsg, matchGroup []string) error {
	if !f.Enabled {
		return nil
	}
	openFlags := os.O_APPEND | os.O_CREATE | os.O_WRONLY
	if f.Overwrite {
		openFlags = os.O_TRUNC | os.O_CREATE | os.O_WRONLY
	}
	fileHandle, err := os.OpenFile(f.FilePath, openFlags, 0o600)
	if err != nil {
		return fmt.Errorf("failed to open file for writing in file action: %w", err)
	}
	defer func() {
		_ = fileHandle.Close()
	}()
	tpl, err := template.Compile(logMessage, matchGroup, f.OutputTemplate)
	if err != nil {
		return err
	}
	_, err = fileHandle.WriteString(tpl)
	if err != nil {
		return fmt.Errorf("failed to write log message to file %q: %w",
			f.FilePath, err)
	}
	if err = fileHandle.Sync(); err != nil {
		return fmt.Errorf("failed to sync memory to file %q: %w",
			f.FilePath, err)
	}

rule.go

@@ -32,28 +32,28 @@ type Rule struct {
// existence, and loads the Ruleset using the fig library.
// It checks for duplicate rules and returns an error if any duplicates are found.
// If all operations are successful, it returns the created Ruleset and no error.
func NewRuleset(config *Config) (*Ruleset, error) {
	ruleset := &Ruleset{}
	path := filepath.Dir(config.Server.RuleFile)
	file := filepath.Base(config.Server.RuleFile)
	_, err := os.Stat(fmt.Sprintf("%s/%s", path, file))
	if err != nil {
		return ruleset, fmt.Errorf("failed to read ruleset: %w", err)
	}
	if err = fig.Load(ruleset, fig.Dirs(path), fig.File(file), fig.UseStrict()); err != nil {
		return ruleset, fmt.Errorf("failed to load ruleset: %w", err)
	}
	rules := make([]string, 0)
	for _, rule := range ruleset.Rule {
		for _, rulename := range rules {
			if strings.EqualFold(rule.ID, rulename) {
				return nil, fmt.Errorf("duplicate rule found: %s", rule.ID)
			}
		}
		rules = append(rules, rule.ID)
	}
	return ruleset, nil
}

server.go

@@ -45,62 +45,62 @@ type Server struct {
}
// New creates a new instance of Server based on the provided Config
func New(config *Config) (*Server, error) {
	server := &Server{
		conf: config,
	}
	server.setLogLevel()
	if err := server.setRules(); err != nil {
		return server, err
	}
	parser, err := parsesyslog.New(server.conf.internal.ParserType)
	if err != nil {
		return server, fmt.Errorf("failed to initialize syslog parser: %w", err)
	}
	server.parser = parser
	if len(actions.Actions) == 0 {
		return server, fmt.Errorf("no action plugins found/configured")
	}
	return server, nil
}
// Run starts the logranger Server by creating a new listener using the NewListener
// method and calling RunWithListener with the obtained listener.
func (s *Server) Run() error {
l, err := NewListener(s.conf)
listener, err := NewListener(s.conf)
if err != nil {
return err
}
return s.RunWithListener(l)
return s.RunWithListener(listener)
}
// RunWithListener sets the listener for the server and performs some additional
// tasks for initializing the server. It creates a PID file, writes the process ID
// to the file, and listens for connections. It returns an error if any of the
// initialization steps fail.
func (s *Server) RunWithListener(listener net.Listener) error {
	s.listener = listener

	// Create PID file
	pidFile, err := os.Create(s.conf.Server.PIDFile)
	if err != nil {
		s.log.Error("failed to create PID file", LogErrKey, err)
		os.Exit(1)
	}
	pid := os.Getpid()
	s.log.Debug("creating PID file", slog.String("pid_file", pidFile.Name()),
		slog.Int("pid", pid))
	_, err = pidFile.WriteString(fmt.Sprintf("%d", pid))
	if err != nil {
		s.log.Error("failed to write PID to PID file", LogErrKey, err)
		_ = pidFile.Close()
	}
	if err = pidFile.Close(); err != nil {
		s.log.Error("failed to close PID file", LogErrKey, err)
	}
@@ -116,47 +116,47 @@ func (s *Server) Listen() {
	defer s.wg.Done()
	s.log.Info("listening for new connections", slog.String("listen_addr", s.listener.Addr().String()))
	for {
		acceptConn, err := s.listener.Accept()
		if err != nil {
			s.log.Error("failed to accept new connection", LogErrKey, err)
			continue
		}
		s.log.Debug("accepted new connection",
			slog.String("remote_addr", acceptConn.RemoteAddr().String()))
		connection := NewConnection(acceptConn)
		s.wg.Add(1)
		go func(co *Connection) {
			s.HandleConnection(co)
			s.wg.Done()
		}(connection)
	}
}
// HandleConnection handles a single connection by parsing and processing log messages.
// It logs debug information about the connection and measures the processing time.
// It closes the connection when done, and logs any error encountered during the process.
func (s *Server) HandleConnection(connection *Connection) {
	defer func() {
		if err := connection.conn.Close(); err != nil {
			s.log.Error("failed to close connection", LogErrKey, err)
		}
	}()

ReadLoop:
	for {
		if err := connection.conn.SetDeadline(time.Now().Add(s.conf.Parser.Timeout)); err != nil {
			s.log.Error("failed to set processing deadline", LogErrKey, err,
				slog.Duration("timeout", s.conf.Parser.Timeout))
			return
		}
		logMessage, err := s.parser.ParseReader(connection.rb)
		if err != nil {
			var netErr *net.OpError
			switch {
			case errors.As(err, &netErr):
				if s.conf.Log.Extended {
					s.log.Error("network error while processing message", LogErrKey,
						netErr.Error())
				}
				return
			case errors.Is(err, io.EOF):
@@ -172,7 +172,7 @@ ReadLoop:
			}
		}
		s.wg.Add(1)
		go s.processMessage(logMessage)
	}
}
@@ -182,36 +182,36 @@ ReadLoop:
// The method first checks if the ruleset is not nil. If it is nil, no actions will be
// executed. For each rule in the ruleset, it checks if the log message matches the
// rule's regular expression.
func (s *Server) processMessage(logMessage parsesyslog.LogMsg) {
	defer s.wg.Done()
	if s.ruleset != nil {
		for _, rule := range s.ruleset.Rule {
			if !rule.Regexp.MatchString(logMessage.Message.String()) {
				continue
			}
			if rule.HostMatch != nil && !rule.HostMatch.MatchString(logMessage.Hostname) {
				continue
			}
			matchGroup := rule.Regexp.FindStringSubmatch(logMessage.Message.String())
			for name, action := range actions.Actions {
				startTime := time.Now()
				if err := action.Config(rule.Actions); err != nil {
					s.log.Error("failed to config action", LogErrKey, err,
						slog.String("action", name), slog.String("rule_id", rule.ID))
					continue
				}
				s.log.Debug("log message matches rule, executing action",
					slog.String("action", name), slog.String("rule_id", rule.ID))
				if err := action.Process(logMessage, matchGroup); err != nil {
					s.log.Error("failed to process action", LogErrKey, err,
						slog.String("action", name), slog.String("rule_id", rule.ID))
				}
				if s.conf.Log.Extended {
					procTime := time.Since(startTime)
					s.log.Debug("action processing benchmark",
						slog.Duration("processing_time", procTime),
						slog.String("processing_time_human", procTime.String()),
						slog.String("action", name), slog.String("rule_id", rule.ID))
				}
			}
		}
@@ -226,21 +226,21 @@ func (s *Server) processMessage(lm parsesyslog.LogMsg) {
// Finally, it creates a new `slog.Logger` with the JSON handler and sets the `s.log` field
// of the `Server` struct to the logger, with a context value of "logranger".
func (s *Server) setLogLevel() {
	logOpts := slog.HandlerOptions{}
	switch strings.ToLower(s.conf.Log.Level) {
	case "debug":
		logOpts.Level = slog.LevelDebug
	case "info":
		logOpts.Level = slog.LevelInfo
	case "warn":
		logOpts.Level = slog.LevelWarn
	case "error":
		logOpts.Level = slog.LevelError
	default:
		logOpts.Level = slog.LevelInfo
	}
	logHandler := slog.NewJSONHandler(os.Stdout, &logOpts)
	s.log = slog.New(logHandler).With(slog.String("context", "logranger"))
}
// setRules initializes/updates the ruleset for the logranger Server by
@@ -248,11 +248,11 @@ func (s *Server) setLogLevel() {
// to the Server's ruleset field.
// It returns an error if there is a failure in reading or loading the ruleset.
func (s *Server) setRules() error {
	ruleset, err := NewRuleset(s.conf)
	if err != nil {
		return fmt.Errorf("failed to read ruleset: %w", err)
	}
	s.ruleset = ruleset
	return nil
}
@@ -261,12 +261,12 @@ func (s *Server) setRules() error {
// It creates a new Config using the NewConfig method and updates the Server's
// conf field. It also reloads the configured Ruleset.
// If an error occurs while reloading the configuration, an error is returned.
func (s *Server) ReloadConfig(path, file string) error {
	config, err := NewConfig(path, file)
	if err != nil {
		return fmt.Errorf("failed to reload config: %w", err)
	}
	s.conf = config
	if err := s.setRules(); err != nil {
		return fmt.Errorf("failed to reload ruleset: %w", err)


@@ -44,99 +44,99 @@ type FuncMap struct{}
// the FuncMap. It then populates a map with values from the LogMsg
// and current time and executes the template using the map as the
// data source. The compiled template result or an error is returned.
func Compile(logMessage parsesyslog.LogMsg, matchGroup []string, outputTpl string) (string, error) {
	procText := strings.Builder{}
	funcMap := NewTemplateFuncMap()

	outputTpl = strings.ReplaceAll(outputTpl, `\n`, "\n")
	outputTpl = strings.ReplaceAll(outputTpl, `\t`, "\t")
	outputTpl = strings.ReplaceAll(outputTpl, `\r`, "\r")
	tpl, err := template.New("template").Funcs(funcMap).Parse(outputTpl)
	if err != nil {
		return procText.String(), fmt.Errorf("failed to create template: %w", err)
	}

	dataMap := make(map[string]any)
	dataMap["match"] = matchGroup
	dataMap["hostname"] = logMessage.Hostname
	dataMap["timestamp"] = logMessage.Timestamp
	dataMap["now_rfc3339"] = time.Now().Format(time.RFC3339)
	dataMap["now_unix"] = time.Now().Unix()
	dataMap["severity"] = logMessage.Severity.String()
	dataMap["facility"] = logMessage.Facility.String()
	dataMap["appname"] = logMessage.AppName
	dataMap["original_message"] = logMessage.Message

	if err = tpl.Execute(&procText, dataMap); err != nil {
		return procText.String(), fmt.Errorf("failed to compile template: %w", err)
	}
	return procText.String(), nil
}
// NewTemplateFuncMap creates a new template function map by returning a
// template.FuncMap.
func NewTemplateFuncMap() template.FuncMap {
	funcMap := FuncMap{}
	return template.FuncMap{
		"_ToLower":  funcMap.ToLower,
		"_ToUpper":  funcMap.ToUpper,
		"_ToBase64": funcMap.ToBase64,
		"_ToSHA1":   funcMap.ToSHA1,
		"_ToSHA256": funcMap.ToSHA256,
		"_ToSHA512": funcMap.ToSHA512,
	}
}
// ToLower returns the lower-case representation of the given string
func (*FuncMap) ToLower(value string) string {
	return strings.ToLower(value)
}
// ToUpper returns the upper-case representation of the given string
func (*FuncMap) ToUpper(value string) string {
	return strings.ToUpper(value)
}
// ToBase64 returns the base64 encoding of a given string.
func (*FuncMap) ToBase64(value string) string {
	return base64.RawStdEncoding.EncodeToString([]byte(value))
}
// ToSHA1 returns the SHA-1 hash of the given string
func (*FuncMap) ToSHA1(value string) string {
	return toSHA(value, SHA1)
}
// ToSHA256 returns the SHA-256 hash of the given string
func (*FuncMap) ToSHA256(value string) string {
	return toSHA(value, SHA256)
}
// ToSHA512 returns the SHA-512 hash of the given string
func (*FuncMap) ToSHA512(value string) string {
	return toSHA(value, SHA512)
}
// toSHA is a function that converts a string to a SHA hash.
//
// The function takes two parameters: a string 'value' and an 'algo' of
// type SHAAlgo which defines the SHA algorithm to be used.
func toSHA(s string, sa SHAAlgo) string {
var h hash.Hash
switch sa {
func toSHA(value string, algo SHAAlgo) string {
var dataHash hash.Hash
switch algo {
case SHA1:
h = sha1.New()
dataHash = sha1.New()
case SHA256:
h = sha256.New()
dataHash = sha256.New()
case SHA512:
h = sha512.New()
dataHash = sha512.New()
default:
return ""
}
_, err := io.WriteString(h, s)
_, err := io.WriteString(dataHash, value)
if err != nil {
return ""
}
return fmt.Sprintf("%x", h.Sum(nil))
return fmt.Sprintf("%x", dataHash.Sum(nil))
}