The idea is that I want to add some addition data to codeql database and use these new data as new predicates to enhance the analysis capabilities. I notice the Idea: The data can be generated by compiler or other static analysis tools. And they are convertible to CodeQL classes/predicates through user-defined mapping rules. For example, for AST nodes, the location string can be converted to AST node if user-defined adaptor predicates are satisfied. Additional information can be inserted to the added data to avoid conflicts when it is generated by compiler or other static analysis tools. Take escape analysis in golang as an example. Some variables can be heap allocated decided by compiler. We can dump the definition location, the type and other information about these variables to csv file. When imported by CodeQL, we can define a adaptor using the location string and the type to map csv file to Issue #9758 also reveals similiar problem I think. |
Hi @Lslightly 👋🏻
A relatively minimal example of how to use this feature is the following. In your query, you define the external predicate and use it (named `foo` here):

```ql
external predicate foo(string bar, string baz);

from string a, string b
where foo(a, b)
select a, b
```

You then create a CSV file with rows for the external predicate, with one column for each parameter. Let's call the following `test.csv`:
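A minimal sketch of such a file, with one row per tuple and two arbitrary string values per row (one for each parameter of `foo`):

```csv
hello,world
lorem,ipsum
```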
Now you can run the query with `codeql query run path-to-your-query.ql --external=foo=test.csv`, where `foo` is the name of the external predicate and `test.csv` is the name of the CSV file.
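In practice the command is also pointed at a CodeQL database; a minimal sketch, assuming a database directory at `./my-db` (a hypothetical path):

```sh
# Run the query against an existing database, supplying test.csv
# as the contents of the external predicate `foo`.
codeql query run --database=./my-db --external=foo=test.csv path-to-your-query.ql
```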