[Netskope] Add multiple system tests for Alerts_v2 and Events_v2 data stream #14887
Conversation
Pinging @elastic/security-service-integrations (Team:Security-Service Integrations)
What's the status of this? Is it still waiting on support elsewhere?
In #14639 it was mentioned that the new functionality was available, but then this went back to blocked.
If this isn't ready for final review and merge, please change it to a draft.
We couldn't add multiple system tests at that time, which blocked us. Later that was resolved, and I raised this PR. However, we're now running into an issue where the environment lacks GCS credentials, causing the CI tests to fail.
This PR is ready for review. The status is currently blocked due to CI failures, which depend on elastic/elastic-package#2606.
Two updates in the issue description would be helpful:
- Add "Requires elastic/elastic-package#2606" to the Related issues section.
- In the "How to test this PR locally" instructions, it would be helpful to mention that cloud credentials need to be set up. For GCS they are `GCLOUD_PROJECT` and `GOOGLE_CLOUD_KEYFILE_JSON`, and for AWS they are `AWS_DEFAULT_PROFILE`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`. Any useful public documentation could be linked (although perhaps the best instructions in this case are not public).
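As a sketch, local setup might look like the following before invoking the system tests (all variable values here are placeholders, not real credentials):

```shell
# Placeholder values -- substitute real credentials before running tests.
export GCLOUD_PROJECT="my-gcp-project"
export GOOGLE_CLOUD_KEYFILE_JSON="$HOME/keys/gcs-service-account.json"
export AWS_DEFAULT_PROFILE="default"
export AWS_ACCESS_KEY_ID="placeholder-access-key"
export AWS_SECRET_ACCESS_KEY="placeholder-secret"
export AWS_SESSION_TOKEN="placeholder-token"

# Fail fast if any required variable is empty before invoking elastic-package.
for v in GCLOUD_PROJECT GOOGLE_CLOUD_KEYFILE_JSON AWS_ACCESS_KEY_ID \
         AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; do
  eval "val=\${$v}"
  [ -n "$val" ] || { echo "missing $v" >&2; exit 1; }
done

# elastic-package test system -v --data-streams alerts_v2,events_v2
```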
```yaml
entrypoint: >
  sh -c "
    sleep 5 &&
    gzip -c /sample_logs/test-alerts-v2.csv > /sample_logs/test-alerts-v2.csv.gz &&
```
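The entrypoint's compression step can be reproduced outside docker; `gzip -c` writes to stdout, so the original CSV stays in place next to the `.gz` copy (the paths and CSV content below are scratch examples, not the container's):

```shell
# Create a small sample CSV in a scratch location (hypothetical content).
printf 'timestamp,alert\n2024-01-01,test\n' > /tmp/test-alerts-v2.csv

# Same technique as the entrypoint: compress to a sibling .gz file,
# leaving the original readable for any non-gzip test cases.
gzip -c /tmp/test-alerts-v2.csv > /tmp/test-alerts-v2.csv.gz

# Round-trip to confirm the archive decompresses to the original bytes.
gunzip -c /tmp/test-alerts-v2.csv.gz > /tmp/roundtrip.csv
```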
From what I can see, for Azure we can gzip in docker like this, and for AWS S3 we can do it in Terraform with `content_base64 = base64gzip(file("./files/events.csv"))`, but for GCS the `google_storage_bucket_object` resource doesn't have that option, so `*.csv.gz` files are committed in the PR.
I think a similar thing can be done for GCS as for S3, but like this:
`content = base64decode(base64gzip(file("./files/events.csv")))`
(doing the base64 decode because the only gzip function I see also base64 encodes).
Then the `*.csv.gz` files can be removed.
If that's impossible for some reason, maybe it's better to switch to only having the `*.gz` files committed.
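For comparison, the S3 variant mentioned above might look like this sketch (the resource labels and file path are hypothetical; `content_base64` accepts the base64-encoded gzip bytes that `base64gzip()` produces, so no decode step is needed there):

```hcl
# Hypothetical names; gzips the CSV at apply time instead of committing .gz files.
resource "aws_s3_object" "events" {
  bucket           = aws_s3_bucket.test_bucket.id
  key              = "events.csv.gz"
  content_base64   = base64gzip(file("./files/events.csv"))
  content_encoding = "gzip"
}
```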
```diff
-output "queue_url" {
+output "aws_queue_url" {
   value = aws_sqs_queue.queue.url
```
I'm getting a lot of errors like this:

```
Error: Reference to undeclared resource

  on main.tf line 88, in output "aws_queue_url":
  88:   value = aws_sqs_queue.queue.url

A managed resource "aws_sqs_queue" "queue" has not been declared in the root module.
```

I think it's because of the change above...

```diff
-resource "aws_sqs_queue" "queue" {
+resource "aws_sqs_queue" "aws_queue" {
```

References like `aws_sqs_queue.queue.url` need to be rewritten to `aws_sqs_queue.aws_queue.url`.
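A minimal sketch of a consistent state after the rename (the queue name below is hypothetical): every reference and output must use the new resource label.

```hcl
# Renamed resource; all references must now use the label "aws_queue".
resource "aws_sqs_queue" "aws_queue" {
  name = "netskope-test-queue" # hypothetical
}

output "aws_queue_url" {
  value = aws_sqs_queue.aws_queue.url
}
```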
After doing some more credential setup this is still a problem.
On main, `elastic-package test system -v --data-streams alerts_v2,events_v2` finishes with:
```
╭──────────┬─────────────┬───────────┬───────────┬────────┬─────────────────╮
│ PACKAGE  │ DATA STREAM │ TEST TYPE │ TEST NAME │ RESULT │ TIME ELAPSED    │
├──────────┼─────────────┼───────────┼───────────┼────────┼─────────────────┤
│ netskope │ alerts_v2   │ system    │ default   │ PASS   │ 1m42.964409874s │
│ netskope │ events_v2   │ system    │ aws-s3    │ PASS   │ 1m37.429060583s │
╰──────────┴─────────────┴───────────┴───────────┴────────┴─────────────────╯
```
and for the PR:
```
╭──────────┬─────────────┬───────────┬───────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────╮
│ PACKAGE  │ DATA STREAM │ TEST TYPE │ TEST NAME │ RESULT                                                                                                           │ TIME ELAPSED  │
├──────────┼─────────────┼───────────┼───────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────┤
│ netskope │ alerts_v2   │ system    │ aws-s3    │ ERROR: could not setup service: Terraform deployer is unhealthy: container (ID: a0a1fe58e208) exited with code 1 │ 40.115532458s │
│ netskope │ alerts_v2   │ system    │ azure     │ PASS                                                                                                             │ 42.256955417s │
│ netskope │ alerts_v2   │ system    │ gcs       │ ERROR: could not setup service: Terraform deployer is unhealthy: container (ID: c5185def3bbc) exited with code 1 │ 41.080000877s │
│ netskope │ events_v2   │ system    │ aws-s3    │ ERROR: could not setup service: Terraform deployer is unhealthy: container (ID: c28e316fb87f) exited with code 1 │ 43.328576143s │
│ netskope │ events_v2   │ system    │ azure     │ PASS                                                                                                             │ 42.343889193s │
│ netskope │ events_v2   │ system    │ gcs       │ ERROR: could not setup service: Terraform deployer is unhealthy: container (ID: 237e566fe775) exited with code 1 │ 43.275605268s │
╰──────────┴─────────────┴───────────┴───────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────╯
```
So it breaks AWS S3.
> I'm getting a lot of errors like this:
>
> ```
> Error: Reference to undeclared resource
>
>   on main.tf line 88, in output "aws_queue_url":
>   88:   value = aws_sqs_queue.queue.url
>
> A managed resource "aws_sqs_queue" "queue" has not been declared in the root module.
> ```
>
> I think it's because of the change above: references like `aws_sqs_queue.queue.url` need to be rewritten to `aws_sqs_queue.aws_queue.url`.

Sure, let me check this.
nits only
`packages/netskope/data_stream/events_v2/_dev/deploy/docker/docker-compose.yml` (outdated; resolved)
/test
🚀 Benchmarks report
💚 Build Succeeded
This LGTM and I think Chris' concerns have been addressed.




Proposed commit message

Checklist

- `changelog.yml` file.

How to test this PR locally

Additionally, the following cloud credentials are required to be set up:

AWS:

Related issues