Instead of passing the whole document as one large string, it may be better to provide each page individually, extract the data from it, and then store the results.
We currently send all of the data as a single concatenated string. This causes problems with the number of tokens needed to process the request (the string is too long), and because some pages contain no relevant information, it also risks overloading the request with noise. One option we considered is to pass each page individually: this keeps each request short, lets us extract data page by page, and allows us to discard pages with no useful content. A drawback of this approach is that the model may modify data that was already correct. The most promising mitigation is to compute an accuracy score for each page's extraction and keep the highest-scoring result for each field as the final value.
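The per-page flow described above could be sketched roughly as follows. This is only an illustration: `extract_page` is a hypothetical stand-in for the model call (here simulated with simple string parsing and a made-up confidence heuristic), and the merge step keeps, for each field, the value with the highest score.

```python
from typing import Dict, List, Tuple

def extract_page(page_text: str) -> Dict[str, Tuple[str, float]]:
    """Hypothetical stand-in for the extraction model.

    Returns, for each field found on one page, a (value, confidence) pair.
    A real implementation would call the model here; the confidence
    heuristic below (longer values score higher) is purely illustrative.
    """
    results: Dict[str, Tuple[str, float]] = {}
    for line in page_text.splitlines():
        if ":" in line:
            field, value = line.split(":", 1)
            v = value.strip()
            results[field.strip()] = (v, min(1.0, len(v) / 20))
    return results

def extract_document(pages: List[str]) -> Dict[str, str]:
    """Process each page individually, skip pages that yield nothing,
    and keep the highest-confidence value per field as the final result."""
    best: Dict[str, Tuple[str, float]] = {}
    for page in pages:
        extracted = extract_page(page)
        if not extracted:
            continue  # discard pages with no relevant information
        for field, (value, score) in extracted.items():
            if field not in best or score > best[field][1]:
                best[field] = (value, score)
    return {field: value for field, (value, _) in best.items()}
```

Because each page is processed in its own short request, no single call approaches the token limit, and the confidence-based merge keeps an earlier correct extraction from being overwritten by a later, lower-scoring one.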