-
Notifications
You must be signed in to change notification settings - Fork 508
[WithSecure] Added the WithSecure Elements integration for collecting security incident and events #15442
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
…idents and events
|
💚 CLA has been signed |
|
I have signed the Contributor Agreement as requested. |
- Fix API request limits (200 -> 50) for incidents and incident_detections - Fix security_events collection by switching from POST to GET method - Add cursor-based deduplication for incidents (updatedTimestampStart) - Add cursor-based deduplication for security_events (persistenceTimestampStart) - Add pagination support with nextAnchor for all data streams - Fix pagination errors with conditional nextAnchor check - Add all engine groups (epp, edr, ecp, xm) to security_events - Increase initial lookback to 30 days for security_events - Simplify security_events ingest pipeline with error handling - Add enable_request_tracer option for debugging - Disable incident_detections by default (requires incident_id) - Add archived=false filter to incidents data stream
…eams (v1.0.3) This release fixes critical issues with field extraction in both security_events and incidents data streams, and adjusts the initial data collection window. Major fixes: - Fixed security_events ingest pipeline to properly decode JSON from message field - Fixed incidents ingest pipeline to properly decode JSON from message field - Resolved issue where message field contained entire JSON instead of individual fields - All event fields are now properly extracted and mapped to withsecure namespace Security Events improvements: - Added JSON decoding processor to extract fields from event.original - Proper extraction of id, action, severity, engine, details, device, organization - Added event_transaction_id field mapping - Added timestamp fields: server_timestamp, persistence_timestamp, client_timestamp - Changed initial lookback window from 30 days to 7 days Incidents improvements: - Added JSON decoding processor to extract fields from event.original - Proper extraction of incidentId, status, severity, categories, sources, etc. - Added withsecure.incident.id field mapping - Changed initial lookback window from 24 hours to 7 days Technical details: - Pipeline now copies message to event.original when preserve_original_event is enabled - JSON is decoded and fields are extracted to root context using Painless script - Temporary fields are cleaned up after extraction - Both data streams now have consistent 7-day initial collection period Version: 1.0.3
|
Hi @andrewkroh, could you please take a look on this integration ? |
|
Pinging @elastic/security-service-integrations (Team:Security-Service Integrations) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This is not a valid configuration. The documentation for this file is here.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I've fixed the test configuration in commit 9429cc3.
Changes made:
- Simplified
test-common-config.ymlto minimal valid configuration - Converted all test input files to proper array format with
eventsstructure - Added corresponding
-expected.jsonfiles withexpectedarray structure - All test files now comply with elastic-package validation requirements
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Suggest making this an array of events as described here.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Suggest using the CEL input instead of HTTP JSON.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thank you for the suggestion
You're absolutely right that CEL is the better choice for new integrations. I initially went with httpjson because I was more familiar with it, but I understand that CEL is now the recommended approach and offers several advantages
I suggest doing the initial integration with httpjson and planning a migration to CEL in a future version. It is a significant amount of work.
Would that work for you?
Could you point me to good reference integrations that use CEL with OAuth2 and pagination that I could learn from?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The ingest pipeline will need improved error handling. Please take a look at some examples of ingest pipelines elsewhere in the repository. This document is also worth reading.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I've significantly improved the ingest pipelines in commit 94b30b7.
Enhancements made for all data streams (incidents, security_events, incident_detections):
-
JSON Decoding:
- Decode JSON from
messagefield orevent.original - Use intermediate
_tempfield to avoid conflicts - Proper
ignore_failure: trueflags
- Decode JSON from
-
Processor Tags & Descriptions:
- Added unique tags (
json_decode,move_json_fields,set_timestamp) - Descriptive comments for debugging
- Added unique tags (
-
Error Handling (
on_failureblock):on_failure: - set: field: event.kind value: pipeline_error - append: field: error.message value: 'Processor {{{_ingest.on_failure_processor_type}}} with tag {{{_ingest.on_failure_processor_tag}}} in pipeline {{{_ingest.on_failure_pipeline}}} failed with message: {{{_ingest.on_failure_message}}}'
Simplified changelog as requested in review feedback from @efd6. Changed to single version 0.1.0 with description 'Initial release.' and link to PR elastic#15442.
Fixed test configuration files to comply with elastic-package validation. Converted test files to proper format with 'events' and 'expected' structure. Simplified test-common-config.yml to minimal valid configuration. Addresses review comment from @efd6 about invalid test configuration.
Enhanced all ingest pipelines with JSON decoding from message field, proper on_failure handler setting event.kind=pipeline_error, tags and descriptions on processors, conditional field checks, and fixed @timestamp for incidents to use createdTimestamp instead of ingestion time. Addresses review comment from @efd6 about improved error handling.
README.md
Changelog
Configuration
Required Parameters
url: WithSecure Elements API URLclient_id: OAuth2 Client IDclient_secret: OAuth2 Client Secretorganization_id: Organization identifierOptional Parameters
interval: Collection frequency (default: 5m)preserve_original_event: Keep raw event datapreserve_duplicate_custom_fields: Keep WithSecure fieldsUse Cases
Security Operations
Analytics & Dashboards
Deployment
Prerequisites
Installation
Checklist
Review Notes
Key Files to Review
manifest.yml- Main integration configurationdata_stream/*/agent/stream/httpjson.yml.hbs- API templatesdata_stream/*/elasticsearch/ingest_pipeline/default.yml- Data processingdata_stream/*/fields/*.yml- Field definitionsTesting
_dev/test/pipeline/Screenshots
Ready for Production
This integration is production-ready and follows all Elastic integration standards. It provides comprehensive WithSecure Elements data collection with full ECS compliance and robust error handling.
Perfect for security teams looking to integrate WithSecure Elements data into their Elastic SIEM!