spq / pkappa2
Network traffic analysis tool for Attack & Defense CTFs
License: Apache License 2.0
Firefox asked me to kill the tab when inspecting a larger payload (~2 KB, IIRC).
When the -base_dir folder is missing the +x flag, no files can be created, and uploading a pcap returns a response of "File already exists" instead of a more descriptive error message. pkappa2 already refuses to start if the -base_dir isn't readable, so that startup check would be a good place to also verify that files can be created.
When importing a pcap file, allow specifying which group it belongs to; challenges can be separated this way. Groups have their own indexes and snapshots, and packets may only be combined with packets in the same group. This might also improve import speed.
This allows multiple tools to easily use the same source, and the packets are streamed live. As a follow-up, maybe support SSL-wrapped pcap-over-ip to make it more secure and to support authentication.
When selecting everything with Ctrl+A, restrict the selection to the stream data instead of including the metadata as well.
The details about the selected stream shown above the stream data don't show the generated marks added to the stream.
The grammar doesn't allow whitespace at the beginning and expects the query to start with something else. It should allow leading whitespace.
tag: doesn't trigger the auto-complete menu.
Instead of having to manually add those tags with a nice color, provide a convenience option to add them given a flag regex.
Add an option to run arbitrary programs after a new pcap was imported. The program could get the path to the imported .pcap file as an argument. This would be useful for external tools to analyze the pcap and add additional info to the streams, but only after pkappa2 knows about the streams itself.
We could add another pcap_postprocessor directory containing all the programs that should be run after the importPcapJob job.
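A minimal sketch of such a hook runner, assuming a directory of executables that each receive the pcap path as their only argument (the directory name pcap_postprocessor and the argument convention are taken from this proposal, not from existing pkappa2 code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"sort"
)

// runPostProcessors executes every program found in hookDir, passing
// the path of the freshly imported pcap as the single argument.
func runPostProcessors(hookDir, pcapPath string) error {
	entries, err := os.ReadDir(hookDir)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // no hooks configured, nothing to do
		}
		return err
	}
	var names []string
	for _, e := range entries {
		if !e.IsDir() {
			names = append(names, e.Name())
		}
	}
	sort.Strings(names) // deterministic order, similar to run-parts
	for _, name := range names {
		cmd := exec.Command(filepath.Join(hookDir, name), pcapPath)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("post-processor %s failed: %w", name, err)
		}
	}
	return nil
}

func main() {
	// With no hook directory present this is a no-op.
	if err := runPostProcessors("pcap_postprocessor", "/tmp/example.pcap"); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("post-processors done")
}
```

Running the hooks only after the importPcapJob finishes matches the requirement that pkappa2 already knows about the streams when external tools start analyzing.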
Filtering for streams not containing a particular string has been broken since converter support was merged.
A simple query like id:123 -cdata:foo will return the stream with id 123 even if it contains the string foo.
The reason is that a converter unrelated to the given stream will not have run on it yet. Thus, an empty stream is used when searching, which matches the filter condition of not containing the string "foo". There should be a distinction between an actually empty stream and a stream that has not (yet) been converted. That distinction will fix this and other issues.
This alone will however not fix the logic related to converters. The current implementation behaves like:
id:123 AND ( -cdata.conv1:foo OR -cdata.conv2:foo OR -cdata.conv3:foo )
but it should behave like:
id:123 AND -( cdata.conv1:foo OR cdata.conv2:foo OR cdata.conv3:foo )
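The difference between the two groupings, plus the empty-vs-unconverted distinction, can be sketched like this. The types and function names are illustrative, not pkappa2's actual implementation; a nil output models "not yet converted":

```go
package main

import (
	"fmt"
	"strings"
)

// converted maps converter names to their output for one stream.
// A nil entry means the converter has not (yet) run on this stream,
// which must not be confused with a genuinely empty output.
type converted map[string]*string

func s(v string) *string { return &v }

// currentMatch mirrors the buggy behavior: the stream matches if ANY
// converter output does not contain needle, and a not-yet-converted
// output counts as empty, i.e. as "does not contain".
func currentMatch(c converted, needle string) bool {
	for _, out := range c {
		text := ""
		if out != nil {
			text = *out
		}
		if !strings.Contains(text, needle) {
			return true
		}
	}
	return false
}

// wantedMatch implements the intended semantics: the stream matches
// only if NO converter output contains needle; unconverted outputs
// are skipped instead of being treated as empty.
func wantedMatch(c converted, needle string) bool {
	for _, out := range c {
		if out == nil {
			continue
		}
		if strings.Contains(*out, needle) {
			return false
		}
	}
	return true
}

func main() {
	stream := converted{"conv1": s("contains foo"), "conv2": nil}
	fmt.Println(currentMatch(stream, "foo")) // true: conv2's fake empty output "matches" -cdata:foo
	fmt.Println(wantedMatch(stream, "foo"))  // false: conv1 contains foo, so the negation fails
}
```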
The graph view isn't intuitive to use.
Non-aggregated graphs might give better insight into outliers in the history. This requires limiting the amount of data to display, like in #62, when displaying every single stream.
When selecting a converter as a one-off view of a single stream, display that converter view again when navigating away and back to the same stream. Make sure to handle the converter disappearing in the meantime.
With some recent changes to the project, the (web) formatting settings were changed. This results in auto-formatting touching "every second line", which in turn leads to auto-formatting not being used anymore. We should either return to the formatting configuration from before the change or reformat every file according to the new settings.
Allow searching more intuitively by not requiring users to learn the query language for a quick regex search in the pcaps. If parts of the query are valid syntax, the query should still fail, to catch typos. But queries like GET /bla should silently be treated like data:"GET /bla".
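A sketch of that fallback rule, assuming a hypothetical normalizeQuery helper; the keyword list is a guess at pkappa2's query grammar, not the actual set:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// keyword matches tokens that look like query-language syntax, e.g.
// "cdata:" or "id:". The list is an assumption for illustration.
var keyword = regexp.MustCompile(`(?i)\b(id|data|cdata|tag|mark|service|port|host|protocol):`)

// normalizeQuery falls back to a plain data search when the input
// contains no query-language keywords at all. If any part of the
// input looks like valid syntax, it is passed through unchanged so a
// later parse error can still surface typos.
func normalizeQuery(q string) string {
	if keyword.MatchString(q) {
		return q
	}
	// Escape embedded quotes so the generated query stays valid.
	escaped := strings.ReplaceAll(q, `"`, `\"`)
	return fmt.Sprintf(`data:"%s"`, escaped)
}

func main() {
	fmt.Println(normalizeQuery("GET /bla"))         // becomes data:"GET /bla"
	fmt.Println(normalizeQuery("id:123 cdata:foo")) // passed through unchanged
}
```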
It's annoying to have to delete and re-add a tag manually when you want to change its query. Maybe some UI sugar that transparently does the deleting for you and reuses the old color could work? That might not handle cases where other tags reference the one you want to edit.
Notify clients when a tag was added/removed or search result is outdated etc.
When selecting bytes in the output and pressing Search Selection, the selection is one byte too large. Selecting /bla in GET /bla HTTP/1.1 causes the query to include the preceding space: cdata:"\x{20}\x{2F}bla". For readability, printable characters should also be inserted plainly, without \x{XX}.
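A sketch of the escaping rule, keeping printable ASCII as-is and escaping only the rest. The helper name is hypothetical; the \x{XX} notation is taken from the example above:

```go
package main

import (
	"fmt"
	"strings"
)

// selectionToQuery builds a cdata query from the selected bytes,
// keeping printable ASCII plain and escaping everything else as
// \x{XX}. Quote and backslash are backslash-escaped so the query
// string stays well-formed.
func selectionToQuery(sel []byte) string {
	var b strings.Builder
	b.WriteString(`cdata:"`)
	for _, c := range sel {
		switch {
		case c == '"' || c == '\\':
			b.WriteByte('\\')
			b.WriteByte(c)
		case c >= 0x20 && c <= 0x7e: // printable ASCII stays readable
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x{%02x}`, c)
		}
	}
	b.WriteString(`"`)
	return b.String()
}

func main() {
	fmt.Println(selectionToQuery([]byte("/bla")))    // cdata:"/bla"
	fmt.Println(selectionToQuery([]byte("a\x00b"))) // cdata:"a\x{00}b"
}
```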
Add tests for the backend and maybe for the frontend.
We'd need to generate pcaps in CI or include some pcaps in the repo for which we know what data to expect. Write queries selecting all kinds of streams and verify that we find all the streams we expect.
When uploading an empty or otherwise invalid .pcap file, pkappa2 will try to parse it over and over, creating and deleting a state.json file in the process.
Run yarn lint and yarn format in a pre-commit hook before committing changes locally. This prevents accidentally pushing unformatted code.
A mark named look here leads to a query like mark:look here when clicked in the menu on the left. This throws a syntax error: 1:11: invalid input text "here". Changing the query to include quotes around the tag name works: mark:"look here".
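A small sketch of the fix on the UI side, with a hypothetical markQuery helper: quote the name whenever it contains characters the grammar cannot take bare (always quoting would also be safe):

```go
package main

import (
	"fmt"
	"strings"
)

// markQuery builds the query for a clicked mark, quoting the name
// when it contains whitespace or quotes that would break the grammar.
func markQuery(name string) string {
	if strings.ContainsAny(name, " \t\"") {
		name = `"` + strings.ReplaceAll(name, `"`, `\"`) + `"`
	}
	return "mark:" + name
}

func main() {
	fmt.Println(markQuery("look here")) // mark:"look here"
	fmt.Println(markQuery("flags"))     // mark:flags
}
```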
Vue 3 became the default version in February 2022. Migration was blocked by Vuetify only supporting Vue 2. Vuetify 3 was released on November 1, 2022, so we should try to migrate now.
Release v3.0.0 · vuetifyjs/vuetify
Breaking Changes | Vue 3 Migration Guide
The new default state management library for Vue 3 is Pinia:
Home | Pinia
If possible, TypeScript support should be enabled; it shouldn't cause much additional trouble:
Using Vue with TypeScript | Vue.js
When searching for "AAAA", highlight all occurrences of "AAAA" in the stream data view. Allowing to skip/scroll to the next occurrence would be useful too.
When accidentally starting a dumb query which takes ages, don't waste resources; allow aborting the lookup before it's done.
Allow to download the stream as pcap from the search result page without viewing its details.
When tagging streams using e.g. Suricata, the tags pollute the Marks list, and manually created ones for interesting traffic get lost. Maybe we can introduce yet another tag type for external tags, which is folded/hidden by default in the left bar but still shows up in the results?
Allow to preview HTML responses right in the browser. Make sure to sandbox it appropriately.
This might need a redesign of the stream data view where every chunk gets its own buttons to change its appearance, instead of changing the appearance of the whole stream at once.
Instead of starting pkappa2 with ./pkappa2 -base_dir /some/path, you could start it with PKAPPA2_BASE_DIR=/some/path ./pkappa2. The Docker setup is easier this way.
The stream count in the status/stats is higher than the actual searchable stream count. The problem occurs when a stream is split across multiple pcaps: when more packets are added to an existing stream upon arrival of a new pcap, the stream is present in multiple indexes and is therefore counted multiple times.
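The fix amounts to deduplicating stream ids across indexes before counting. A minimal sketch, where indexStreams is a simplified stand-in for pkappa2's per-index stream lists:

```go
package main

import "fmt"

// countUniqueStreams counts distinct stream ids across all indexes,
// so a stream split over multiple pcaps (and therefore present in
// multiple indexes) is counted exactly once.
func countUniqueStreams(indexStreams [][]uint64) int {
	seen := make(map[uint64]struct{})
	for _, ids := range indexStreams {
		for _, id := range ids {
			seen[id] = struct{}{}
		}
	}
	return len(seen)
}

func main() {
	// Stream 7 appears in two indexes because a later pcap added packets.
	fmt.Println(countUniqueStreams([][]uint64{{1, 2, 7}, {7, 9}})) // 4
}
```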
When executing certain queries, the server becomes unresponsive and overloaded, even on very high-spec hardware.
Proposed solution: cap execution time and/or resource usage for each query, and add a way to abort a query.
Allow to limit the search to recent events without always having to type it manually in the query.
I wish that when I press ctrl+r somewhere on the page, I could use it to search backwards in my previous search queries.
Just like in bash/zsh or other common shells and terminals.
Trying to filter by IP addresses doesn't work and always yields an empty result.
You cannot see how much time passed between a request and a response. This is relevant for race conditions involving multiple streams. flower shows the delta to the next chunk.
When a stream is converted, the output is cached. If the stream receives more packets in a new pcap, the longer stream is not converted again resulting in outdated converter output.
We need to invalidate the cache and rerun the converter when a stream is updated.
pkappa2/internal/index/manager/manager.go
Lines 502 to 503 in c1e3a3f
Instead of downloading a pcap, allow downloading just the raw binary blob of the traffic. Selecting which side of the traffic (client/server/both) to export, or even selecting per chunk, seems useful.
Currently only the tag query is saved, and the query is run again on startup to find all the matches, which can take a long time. Since we ran the query before, we could store the matching stream ids somewhere and load them on startup instead of querying again.