
bc2adls's Introduction

Project

This tool is an experiment for Dynamics 365 Business Central, created solely to explore the possibilities of exporting data to an Azure Data Lake. To see the details of how this tool is supported, please visit the Support page.

Please be aware that, for the foreseeable future, this repository will not receive any new updates. However, development continues in forks of this repository, e.g., Bertverbeek4PS/bc2adls. If you would like to contribute to bc2adls, please do so in one of the active forks.

Introduction

The bc2adls tool is used to export data from Dynamics 365 Business Central (BC) to Azure Data Lake Storage and expose it in the CDM folder format. The components involved are the following:

  • the businessCentral folder holds a BC extension called Azure Data Lake Storage Export (ADLSE) which enables the export of incremental data updates to a container on the data lake. The increments are stored in the CDM folder format described by the deltas.manifest.cdm.json manifest.
  • the synapse folder holds the templates needed to create an Azure Synapse pipeline that consolidates the increments into a final data CDM folder.

The following diagram illustrates the flow of data through a typical usage scenario; the main points are (a sketch of the resulting container layout follows the list):

  • Incremental update data from BC is moved to Azure Data Lake Storage through the ADLSE extension into the deltas folder.
  • Triggering the Synapse pipeline(s) consolidates the increments into the data folder.
  • The resulting data can be consumed by applications, such as Power BI, in the following ways:
    • CDM: via the data.manifest.cdm.json manifest
    • CSV/Parquet: via the underlying files for each individual entity inside the data folder
    • Spark/SQL: via shared metadata tables
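
For orientation, the resulting layout in the storage container looks roughly like this (a sketch only, assuming the default names mentioned above and in the issues below; entity folders follow the Name-TableId pattern, e.g. Customer-18):

  <container>/
      deltas.manifest.cdm.json        manifest describing the increments (written by the ADLSE extension)
      deltas/
          Customer-18/
              <guid>.csv              one file of incremental changes per export run
      data.manifest.cdm.json          manifest describing the consolidated data (written by the pipelines)
      data/
          Customer-18/
              (consolidated CSV or Parquet files for this entity)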

Architecture

(Architecture diagram; more details are linked from the original repository.)

Changelog

This project is no longer receiving new updates. Find a list of all previous updates in the changelog.

Testimonials

Here are a few examples of what our users are saying ...

“After careful consideration we, as Magnus Digital, advised VolkerWessels Telecom, a large Dutch telecom company, to use and exploit the features of BC2ADLS. We currently see BC2ADLS as the only viable way to export data from Business Central to Azure Data Lake at large scale and across multiple administrations within BC. With the good help of Soumya and Henri, we were able to build a modern data warehouse in Azure Synapse with a happy customer as a result.”

— Bas Bonekamp, Magnus Digital

“With bc2adls we have found a way to export huge amounts of data from Business Central to a data warehouse solution. This helps us a lot in unburdening big customers that move to Business Central Online. It is also easy to use for our customers, so they can define their own set of tables and fields and schedule the exports.”

— Bert Verbeek, 4PS

“I can't believe how simple and powerful loading data from Business Central is now. It's like night and day—I'm loving it!”

— Mathias Halkjær Petersen, Fellowmind

“At Kapacity we have utilized the bc2adls tool in several customer projects. These customer cases span from a small project with data extracts from 1-3 companies in Dynamics Business Central SaaS (BC) to an enterprise solution with data extracts from 150 companies in BC. bc2adls exports multi-company data from BC to Azure Data Lake Storage effectively with incremental updates. The bc2adls extension for BC is easy to configure and maintain. The customer can add new entities (tables and fields) to an existing configuration and even extend the data extract to include a new company setup. We have transformed data with the Azure Synapse pipelines using the preconfigured templates from the bc2adls team. The data analysts query this solution in Power BI using the shared metadata database on serverless SQL. In the enterprise project we did the data transformation using Azure Databricks. Thanks to the bc2adls team for providing these tools and the great support that enabled us to incorporate this tool in our data platform.”

— Jens Ole Taisbak, TwoDay Kapacity

“We have had great success using the BC2ADL tool. It is well thought out and straightforward to implement and configure. The Microsoft team that develops the tool continues to add new features and functionality that have made it a great asset to our clients. We looked to the BC2ADL tool to solve a performance issue in reporting for Business Central. Using the BC2ADL tool along with Synapse serverless SQL, we have been able to remove the primary reporting load from the BC transactional database, which has helped alleviate a bottleneck in the environment. When the BC2ADL tool was updated to export from the replicated BC database, we were able to take full advantage of the process and provide intraday updates of the Azure Data Lake with no noticeable effect on BC performance. The Microsoft team has been extremely helpful and responsive to requests from the community on feature requests and support.”

— Tom Link, Stoneridge Software

Contributing

Please be aware that, for the foreseeable future, this repository will not receive any new updates. However, development continues in forks of this repository, e.g., Bertverbeek4PS/bc2adls. If you would like to contribute to bc2adls, please do so in one of the active forks.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

bc2adls's People

Contributors

bertverbeek4ps, duttasoumya, henrischulte-ms, julianschmidtke, microsoft-github-operations[bot], microsoftopensource, paulfurlet, ronkoppelaar


bc2adls's Issues

Naming of the Deltas

We are planning to start with a small data lake setup: delta sync and then small Azure Functions to update a SQL Server; no Synapse pipelines, etc.

In Dynamics F&O the deltas are handled through the Change Feed. Each CSV in the Change Feed has a unique name containing the SQL LSN (Log Sequence Number): https://learn.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/data-entities/azure-data-lake-change-feeds

In this app, deltas are named with a GUID. I was wondering if we might also add an LSN / hex representation of the timestamps in BC. That way we could import the deltas alphabetically.
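
As a rough illustration of the idea, a helper along the following lines could turn the BC record timestamp (a BigInteger) into a fixed-width, alphabetically sortable hex string for use in the delta file name. This is a sketch only and not part of the current extension; the object ID and name are hypothetical.

codeunit 50100 "Delta Name Sketch"
{
    // Converts a BC timestamp (BigInteger) into a 16-character, zero-padded hex string so
    // that delta files named with it sort chronologically when sorted alphabetically.
    procedure ToSortableHex(Timestamp: BigInteger): Text
    var
        HexDigits: Text;
        Result: Text;
        Digit: Integer;
    begin
        HexDigits := '0123456789ABCDEF';
        if Timestamp = 0 then
            Result := '0';
        while Timestamp > 0 do begin
            Digit := Timestamp mod 16;
            Result := CopyStr(HexDigits, Digit + 1, 1) + Result;
            Timestamp := Timestamp div 16;
        end;
        exit(PadStr('', 16 - StrLen(Result), '0') + Result); // left-pad to a fixed width of 16
    end;
}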

Timeout of database query after 30 min in Business Central

I need to export a table from Business Central (GLEntry-17), but it currently weighs 23 GB and the query takes more than 30 min, so it is canceled automatically. Checking the Business Central limits, I see that there are query limits at the standard level. What could be done to fix this?

https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/operational-limits-online#query-limits


"An unexpected error occurred after canceling a database command."ADLSE Seek Data"(Report 82560)."Number - OnAfterGetRecord"(Trigger) line 5 - Azure Data Lake Storage Export by The bc2adls team, Microsoft Denmark" ADLSE Seek Data"(Report 82560).GetResult line 4: Azure Data Lake Storage Export by The bc2adls team, Microsoft Denmark"ADLSE Seek Data"(Report 82560).FindRecords line 3 - Azure Data Lake Storage Export by The bc2adls team, Microsoft Denmark"ADLSE Execute"(CodeUnit 82561).ExportTableUpdates line 22 - Azure Data Lake Storage export by The bc2adls team, Microsoft Denmark"ADLSE Execute"(CodeUnit 82561).TryExportTableData line 12 - Azure Data Lake Storage export by The bc2adls team, Microsoft Denmark"ADLSE Execute"(CodeUnit 82561).OnRun(Trigger) line 47 - Azure Data Lake Storage Export by The bc2adls team, Microsoft Denmark"

Last exported state is Failed when exporting many tables

Hello all,

We export a larger number of tables (currently with only a little data in them). The export seems to be successful, but for some tables the "Last exported state" is displayed as Failed. However, we could see that the data seems to have been transferred successfully to the delta folder in ADLS anyway. When we restart the export, the number of failed tables decreases; after 3-4 runs all are successful. The message looks like this:


Has anyone had experience with this?

Delta Files not being deleted

Hello,

I am using the BC2ADLS process and am having an issue where the delta files are not being deleted. I am setting the DeleteDeltas flag on all pipelines to true. When I trigger the process, everything runs as expected, but the folders within the deltas folder are not deleted. I enabled logging on the activity called from the True condition of IfDeleteDeltas, and the log produces this:

Name,Category,Status,Error
WarehouseShipmentLine-7321.csv,File,Deleted,File does not exist.

The file within the deltas subfolder is not named WarehouseShipmentLine-7321.csv. I see the delete-deltas activity is passing in the entity name. I assume this is supposed to delete the folder WarehouseShipmentLine-7321, but the logging makes it look like it is trying to delete a file with a .csv extension. I would be happy to jump on a quick Teams meeting to review this. Thanks

Tom Link

Enabled field in the ADLSE Table not working as expected

Hi,

I've just run a test with an empty container and a setup with 3 tables. Only one table had the Enabled column ticked.

When I run the export I'm expecting that only data from one table (the one marked as Enabled) will be exported. The message says "3 out of 3 tables will be exported" and indeed 3 tables have been exported.

I suppose that the code for exporting should filter for Enabled = true in order to exclude the tables not set as Enabled (roughly like the sketch below).
In previous versions there was a button to set On Hold, which has probably been replaced by this Enabled column.
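
For illustration, the kind of filter suggested above could look like this in AL (a minimal sketch, assuming the setup table "ADLSE Table" with its Enabled field as referenced in this issue; the object ID and name are hypothetical and the actual export code is organised differently):

codeunit 50101 "Enabled Tables Sketch"
{
    // Counts only the configured tables that are actually marked for export. The export
    // loop (and the "x out of y tables will be exported" message) would be expected to
    // apply the same Enabled filter.
    procedure CountTablesToExport(): Integer
    var
        ADLSETable: Record "ADLSE Table";
    begin
        ADLSETable.SetRange(Enabled, true); // skip tables where Enabled is not ticked
        exit(ADLSETable.Count());
    end;
}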

Please let me know

Thank you
Eclipses


Every exported file is in CSV

Hello,

I have an issue with the type of file exported. Every time I export, the file created is in CSV, even when I ask for Parquet.

I have tried deleting all the files, modifying the JSON file by pasting in the Parquet mode, and even deleting the CSV option in all the code, but each time it doesn't work.

When I try to debug, it doesn't use or ask which type of file I want.
I think all of this is created by an API that forces the CSV type.

Thank you,
Philippe

Why do we export Table and Field IDs?

Other systems export entities by their name only. They would export ItemLedgerEntry.cdm.json instead of ItemLedgerEntry-32.cdm.json.

Our BI engineers were wondering why we have to export the ID in the name of the entity. So I'm wondering, too: wouldn't it be better to create custom key/value attributes (e.g. BusinessCentralId) that represent the ID? That way the entities would have the same name as the tables in BC.

Truncation and Comparing Records Counts from BC to ADL

Hello,

We have been having issues with truncation of some data during the export process. The export reports success, but the delta and data tables do not contain records consistent with what we are seeing in BC. We are trying to develop a process to better identify when this occurs. We noticed that TableInformationCache-8700 is an entity that can be added to the export. We were hoping to use that table to compare the cached record count with the record count we have in Synapse.

You do not have sufficient permissions to read from the table.
"ADLSE Execute"(CodeUnit 66056).ExportTableUpdates line 17 - BC2ADLS
"ADLSE ExecuteSSI"(CodeUnit 66056).TryExportTableData line 12 -
"ADLSE ExecuteSSI"(CodeUnit 66056).OnRun(Trigger) line 46 -

Is this table able to be exported? We have tried in a newer version of the BC2ADL tool, and while it doesn't error, the table also does not seem to initiate, staying in the Last Exported State of "Never Run" although it is enabled in the export. Thanks for any help!

Tom

What's the current status?

I would like to know the current status of this project.

Is there any roadmap where we can see how this project is advancing? Do you expect this project to be stable in the near future? Is it reliable to use in a Production environment nowadays?

Thank you

Enum Ids and Field Captions

Our BI solution for OnPrem uses Option and Enum IDs to map values.
We would like to have a numerical mapping for multi-language support, too.

When a table field of type enum is exported, it looks like this:

CDM:

{
    "name": "DocumentType-1",
    "dataFormat": "String",
    "appliedTraits": [],
    "displayName": "Document Type"
},

CSV:

DocumentType-1
--
Quote
Quote
Order

I would like to have the following information:

  1. The available Field Translations
  2. A Mapping from Enum Id to Enum Name

I tried to export metadata tables like Table Metadata, but that gives me only table information and no field information.
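
For what it is worth, the numerical mapping requested in point 2 can be derived at runtime from a FieldRef. The sketch below (hypothetical object ID and name) builds an ordinal-to-name dictionary for an enum or option field, which could then be exported as a small lookup entity:

codeunit 50102 "Enum Mapping Sketch"
{
    // Builds a map of enum ordinal -> enum value name for a given table field,
    // e.g. 0 -> "Quote", 1 -> "Order" for the "Document Type" field shown above.
    procedure GetEnumValueMap(TableId: Integer; FieldNo: Integer) ValueMap: Dictionary of [Integer, Text]
    var
        RecRef: RecordRef;
        FldRef: FieldRef;
        i: Integer;
    begin
        RecRef.Open(TableId);
        FldRef := RecRef.Field(FieldNo);
        for i := 1 to FldRef.GetEnumValueCount() do
            ValueMap.Add(FldRef.GetEnumValueOrdinal(i), FldRef.GetEnumValueName(i));
        RecRef.Close();
    end;
}

Translated captions (point 1) could presumably be collected in a similar loop per language, e.g. by reading the value captions while switching GlobalLanguage, but that part is not shown here.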

Pipeline fails if there is no new data in delta folder

Hi,

In a scenario where there are a lot of tables to be exported, with synchronization in BC and pipelines in Synapse scheduled at regular intervals, it can happen that for some tables there are no new records, which means no new delta.

With the current version the Pipelines fail.


Error Messages:
Operation on target FailIfEntityNotFound failed: Entity with name Customer-18 was found in the deltas.manifest.cdm.json but no directory with that name was found in businesscentralgb/deltas/. This may be the case if no new deltas have been exported.

Operation on target FailIfEntityNotFound failed: Entity with name LSCStore-99001470 was found in the deltas.manifest.cdm.json but no directory with that name was found in businesscentralgb/deltas/. This may be the case if no new deltas have been exported.

Operation on target FailIfEntityNotFound failed: Entity with name GLAccount-15 was found in the deltas.manifest.cdm.json but no directory with that name was found in businesscentralgb/deltas/. This may be the case if no new deltas have been exported.

Operation on target For each entity failed: Activity failed because an inner activity failed; Inner activity name: ConsolidateNewDeltas, Error: Operation on target FailIfEntityNotFound failed: Entity with name GLAccount-15 was found in the deltas.manifest.cdm.json but no directory with that name was found in businesscentralgb/deltas/. This may be the case if no new deltas have been exported.

In my opinion the system should not return a failure but rather an info or warning.

Thank you
Eclipses

Export to OneLake (Fabric)

Currently MS is making a lot of noise regarding Fabric, a complete SaaS solution for all BI-related stuff. The foundation of the Fabric solution is OneLake, technically a Data Lake (vNext) storage account, I guess.
Having the BC extension able to push data directly into OneLake would be a great improvement to the extension.

Within our organisation the BI team is investigating Fabric in more detail. I guess they will use custom code for now to get BC data into OneLake, or use a dedicated Data Lake to store the data first.

Runtime Error if a field is not enabled

Scenario:
We have a table that has a field with Enabled = false. When we use the action "Activate all valid fields", this field is activated.
When trying to export the table, we receive a runtime error that AddLoadFields is not supported for this field.

Expected:
Like FlowFields or obsolete fields, fields that are not enabled cannot be exported and should not be activated for export (see the sketch below).
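
For illustration, the kind of guard "Activate all valid fields" could apply might look like this (a sketch only, using the virtual Field table; the object ID and name are hypothetical, and the real validation in the extension may check more conditions):

codeunit 50103 "Valid Field Sketch"
{
    // A field can only be read via AddLoadFields and exported if it is a normal field
    // (not a FlowField/FlowFilter), is enabled, and has not been removed as obsolete.
    procedure IsExportableField(TableId: Integer; FieldNo: Integer): Boolean
    var
        FieldRec: Record Field;
    begin
        if not FieldRec.Get(TableId, FieldNo) then
            exit(false);
        exit(FieldRec.Enabled and
            (FieldRec.Class = FieldRec.Class::Normal) and
            (FieldRec.ObsoleteState <> FieldRec.ObsoleteState::Removed));
    end;
}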

Failing Table Export After BC Upgrade and Extension Install

Hello,

We are exporting 29 tables from BC. After BC was upgraded over the weekend, we had to re-install the extension. Most of the tables continue to export cleanly. One table (Customer-18) is failing with the error below. I have tried resetting the table, I have removed it entirely from the export, re-added it and re-selected columns, and it continues to fail. It seems to be failing at different times in the export; I have seen file sizes in the delta of 44 MB, 55 MB and 179 MB prior to it failing. Prior to the upgrade the full delta file was close to 700 MB. Thanks for any help.

The operation was canceled. "ADLSE HttpSSI"(CodeUnit 66059).InvokeRestApi - BC2ADLS by Stoneridge Software, LLC
"ADLSE HttpSSI"(CodeUnit 66059).InvokeRestApi line 4 - BC2ADLS by Stoneridge Software, LLC
"ADLSE Gen 2 UtilSSI"(CodeUnit 66058).AddBlockToDataBlob line 13 - BC2ADLS by Stoneridge Software, LLC
"ADLSE CommunicationSSI"(CodeUnit 66054).FlushPayload line 16 - BC2ADLS by Stoneridge Software, LLC
"ADLSE CommunicationSSI"(CodeUnit 66054).CollectAndSendRecord line 14 - BC2ADLS by Stoneridge Software, LLC
"ADLSE CommunicationSSI"(CodeUnit 66054).TryCollectAndSendRecord line 3 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).ExportTableUpdates line 27 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).TryExportTableData line 12 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).OnRun(Trigger) line 44 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).ExportTableUpdates line 31 - BC2ADLS by Stoneridge Software, LLC"ADLSE ExecuteSSI"(CodeUnit 66056).TryExportTableData line 12 - BC2ADLS by Stoneridge Software, LLC"ADLSE ExecuteSSI"(CodeUnit 66056).OnRun(Trigger) line 44 - BC2ADLS by Stoneridge Software, LLC\

Tom

Multi-select in BC2ADLS app

Hi Soumya,

As discussed, I will hereby log the request to make the multi-select reset available in the BC app.

In the current situation we have to reset all tables one by one during the development process.

Hope this is a quick fix!

Regards,

Bas

Error in Consolidation_OneEntity

Hello,

We have been running the extract and Synapse processes without error for several months. Beginning last week we began to see the following error on the Consolidation_OneEntity pipeline.

Operation on target Copy CSV failed: 'Type=System.InvalidOperationException,Message=Collection was modified; enumeration operation may not execute.,Source=mscorlib,'

It does not seem to occur consistently or on consistent objects. We have to Reset the object in BC and then run the process through to the Synapse pipeline. It has occurred on several different BC objects (SalesHeader-36,WarehouseShipmentHeader-7320,PurchaseLine-39).

Has anyone else seen this issue on the Consolidation_OneEntity pipeline or other pipelines in this project?

Thanks

Tom

No export of json files if table doesn't have data

Issue:
When you have a multi-company export and you add a table and perform an export in a company without any data, the JSON files aren't created.
So when you then switch multi-company export on, you get an error that the JSON files are different, which isn't allowed with multi-company export.

Resolution
In codeunit 82570 "ADLSE Session Manager" there is the function:
local procedure DataChangesExist(TableID: Integer): Boolean
We could add a check similar to the CheckEntity function in codeunit 82562 "ADLSE Communication":
if EntityJsonNeedsUpdate or ManifestJsonsNeedsUpdate is true, then the result of DataChangesExist should also be true (see the sketch below).
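
To make the suggestion concrete, a helper along these lines could be called from DataChangesExist so that a pending schema update also counts as a change (a sketch only; the CheckEntity signature is assumed from its mention in this issue and may differ in the actual extension):

// possible addition in codeunit 82570 "ADLSE Session Manager"
local procedure SchemaUpdateNeeded(TableID: Integer): Boolean
var
    ADLSECommunication: Codeunit "ADLSE Communication";
    EntityJsonNeedsUpdate: Boolean;
    ManifestJsonsNeedsUpdate: Boolean;
begin
    // Mirrors the CheckEntity check: if the entity or manifest jsons still need to be
    // (re)created, report "changes exist" so the export runs even without new records.
    ADLSECommunication.CheckEntity(TableID, EntityJsonNeedsUpdate, ManifestJsonsNeedsUpdate); // assumed signature
    exit(EntityJsonNeedsUpdate or ManifestJsonsNeedsUpdate);
end;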

Adding more fields in BC tables exporting via extension after the initial load

Hi Team,

We have been trying the below project for one of our projects. It is an excellent solution for the data export from Business Central.

GitHub - microsoft/bc2adls: Exporting data from Dynamics 365 Business Central to Azure data lake storage

My question is regarding the following scenario: we export a set of fields in some tables and make the data available in the final data folder. In the future (maybe in the next couple of months), a new reporting requirement may mean we need to add more fields to some tables.

Can you suggest or confirm whether the following is correct:

The process for that is to add more fields to the tables in the BC extension, run the job, and then subsequently run the pipelines to get the data into the data folder? Would the historical data for the new fields also be added?

I imagine this is a generic scenario in any project.

Hope the above makes sense. Please let me know if you have any questions.

Any suggestions on any other solution to this scenario would be appreciated.

Consolidation Pipelines Optimization

Hello BC2ADLS project team, we've implemented this solution for a couple of our clients and it is working very well; thank you for your effort. It's been a few weeks of running it and now we are in the optimization stage. What we are seeing is that the run time of the Synapse consolidation pipelines is increasing as we add more and more entities (tables) to the architecture. We were wondering if you could take a look with us at one of the environments that we have set up and help us think about ways to optimize the run time. Right now the complete consolidation process takes 5 hours for one of our clients and we are trying to see if there is room for improvement. Thanks once again!

Export doesn't seem to terminate

It's my first time trying out the extension since seeing it in a presentation last year. It installed fine and I managed to set up all Azure components relatively straightforwardly.

Sadly, I'm running into two dead ends now.

In BC I added only my Item table as an initial test. The files are created in my Azure Blob Storage container, but the export task in BC never terminates and remains "In process". I've tried with various tables and all deliver the same result.

I thought that since the data is there, I might as well just run the Synapse pipeline once, but no luck there either.
The Lookup Entities step fails with the following error:

{
    "errorCode": "2100",
    "message": "ErrorCode=UserErrorFileNotFound,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ADLS Gen2 operation failed for: Operation returned an invalid status code 'NotFound'. Account: 'bc2adlslkn'. FileSystem: 'bc2adls'. Path: 'deltas.manifest.cdm.json'. ErrorCode: 'PathNotFound'. Message: 'The specified path does not exist.'. RequestId: '299df862-001f-0070-2b0f-cd2a0d000000'. TimeStamp: 'Sat, 12 Aug 2023 11:24:06 GMT'.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.Azure.Storage.Data.Models.ErrorSchemaException,Message=Operation returned an invalid status code 'NotFound',Source=Microsoft.DataTransfer.ClientLibrary,'",
    "failureType": "UserError",
    "target": "Entities",
    "details": []
}

I assume this has something to do with the BC task not completing. What steps can I take to figure this out?

Add Folder path option for container

Hello,

Is it possible to add an optional directory path to further determine the export location for the data?

Currently only the container can be specified:

Thank you,
Jan

Form.RunModal Error when picking fields for Tables

I've compiled and uploaded the app to one of our BC test environments. It connects fine but when clicking to try and add columns:


It's returning the following error:

The following AL methods are limited during write transactions because one or more tables will be locked:

Form.RunModal is not allowed in write transactions.

Codeunit.Run is allowed in write transactions only if the return value is not used. For example, 'OK := Codeunit.Run()' is not allowed.

Report.RunModal is allowed in write transactions only if 'RequestForm = false'. For example, 'Report.RunModal(...,false)' is allowed.


The settings appear fine, as it's able to connect to ADLS using the accounts, etc. Any idea of next steps to troubleshoot/fix, or is there something obvious I might have missed in setting it up?

Error when trying to create pipeline "Could not load resource 'Consolidation_OneEntity'. Please ensure no mistakes in the JSON and that referenced resources exist."

Hello,

Following Setup.md at step 5, I'm able to create all of the Synapse resources and load/save/publish the integration datasets and data flow; however, when I try to create a pipeline in the same way, I receive the error above. The same self-referencing error occurs for all three pipelines, and there does not seem to be any additional information that can help me troubleshoot. Any advice or direction on how to resolve this is appreciated.


Tables getting bulkier with time

There are log tables in the extension which get bulkier with exports and need to be cleaned up periodically.

Retention policies need to be set up for these (a minimal registration sketch follows the list):

  • table 82563 "ADLSE Deleted Record"
  • table 82566 "ADLSE Run"

Always export Entity CustomDimension in Telemetry

When using telemetry, the event ADLSE-017 ("Starting the export for table") exports the entity as a custom dimension.
The events ADLSE-004 ("Exporting with parameters"), ADLSE-020 ("Exported to deltas CDM folder"), and so on do not include the entity at the moment.

It would be nice to always have the Entity custom dimension so we can find out how long the export for a given entity took.

The count of updated records would also be interesting. That way we can calculate the mean duration for the export of a single dataset and monitor the performance (a sketch follows below).
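
For illustration, attaching these values as custom dimensions is straightforward with Session.LogMessage (a sketch with a hypothetical object ID and name; the event ID and message mirror ADLSE-020 mentioned above):

codeunit 50105 "ADLSE Telemetry Sketch"
{
    procedure LogExportedToDeltas(EntityName: Text; UpdatedRecordCount: Integer; DurationMs: BigInteger)
    var
        CustomDimensions: Dictionary of [Text, Text];
    begin
        // Attach the entity, record count and duration to the event so the mean export
        // duration per entity can be analysed in Application Insights.
        CustomDimensions.Add('Entity', EntityName);
        CustomDimensions.Add('UpdatedRecords', Format(UpdatedRecordCount));
        CustomDimensions.Add('DurationMs', Format(DurationMs));
        Session.LogMessage('ADLSE-020', 'Exported to deltas CDM folder', Verbosity::Normal,
            DataClassification::SystemMetadata, TelemetryScope::ExtensionPublisher, CustomDimensions);
    end;
}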

Transition to Azure Synapse runtime for Apache Spark 3.2 or 3.3

All,

Hello and hope all is well. We received a notification from Microsoft that the Synapse workloads we use for the BC2ADL tool reference a version of the Azure Synapse runtime for Apache Spark that will expire in September. While we do not use this functionality within our process, I believe we will need to update the references in the pipelines. Is there a current resolution to this in an existing version of the BC2ADL tool? Or is this issue being reviewed by the team?

Below is the notification.

Thanks

Tom Link

Transition to Azure Synapse runtime for Apache Spark 3.2 or 3.3
You're receiving this notice because you have workloads that use Azure Synapse runtime for Apache Spark 2.4 between April 12, 2023, and May 12, 2023.
On 29 September 2023, Azure Synapse runtime for Apache Spark 2.4 will be retired in accordance with the Synapse runtime for Apache Spark lifecycle policy, and any workloads still using it will stop running. Before that date, you'll need to transition your workloads to version 3.2 or 3.3.

Write Transactions in Try Functions

Hi 👋

Thank you for this great extension. It seems very useful.

I have been setting up this extension for a customer on a BC 21.2 on-premises installation, and in doing so a question has arisen.

Observed behavior
I have encountered that there are write transactions inside try functions.

The one I have encountered is found in table 82564 "ADLSE Table Last Timestamp" here:

This causes a runtime error when write transactions in try functions are disabled (server configuration key "DisableWriteInsideTryFunctions" = true).

Thus, I have had to disable this configuration to make the export work.

The default configuration is to disable write transactions in try functions because changes are not rolled back as outlined in the documentation here:
https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-handling-errors-using-try-methods

I have not worked much with cloud implementations, but I assume this is also default behavior in cloud.

However, the procedure names suggest that this is intentional behavior.

Question
I would prefer to use the default configuration to avoid any issues elsewhere in the application.

Therefore, my question is whether this is indeed intentional, and if there is good reasoning behind it.
And if not, would there be support for refactoring this to not have write transactions in try functions? (A minimal illustration of the pattern follows below.)
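
For reference, the pattern in question boils down to something like this (a minimal illustration; the table is hypothetical and merely stands in for table 82564 "ADLSE Table Last Timestamp"):

codeunit 50106 "Try Write Sketch"
{
    [TryFunction]
    procedure TryRememberTimestamp()
    var
        MyTimestampEntry: Record "My Timestamp Entry"; // hypothetical table
    begin
        // A write inside a TryFunction: raises a runtime error when the server setting
        // DisableWriteInsideTryFunctions is enabled, and is not rolled back on a caught
        // error when the setting is disabled.
        MyTimestampEntry.Init();
        MyTimestampEntry.Insert();
    end;
}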

Add additional cdm Properties

CDM has some properties that should be checked for Business Central.

https://learn.microsoft.com/en-us/common-data-model/1.0om/api-reference/cdm/typeattribute

Name | Type | Description
DataType | CdmDataTypeReference | The type attribute's data type, a Common Data Model object reference.
DataFormat | CdmDataFormat | The type attribute's data format (string, int, etc.).
AttributeContext | CdmAttributeContextReference | The attribute context of the type attribute.
DefaultValue | dynamic | The type attribute's default value.
Description | string | The type attribute's description.
DisplayName | string | The type attribute's display name.
SourceName | string | The type attribute's source name.
IsNullable | bool? | Denotes whether the type attribute can be null or not.
IsPrimaryKey | bool? | Denotes whether the type attribute is the primary key.
IsReadOnly | bool? | Denotes whether the type attribute is read only.
MaximumLength | int? | The type attribute's maximum length.
MaximumValue | string | The type attribute's maximum value (for data types that this can apply to, like integers).
MinimumValue | string | The type attribute's minimum value (for data types that this can apply to, like integers).
SourceOrdering | int? | Denotes the order attributes exist in some underlying source system.
ValueConstrainedToList | bool? | Denotes whether the type attribute's value is constrained to a list. The values can only be from enums.

I believe the following should be included in BC exports:

Name | In BC
DefaultValue | InitValue
Description | Description
IsNullable | not NotBlank
IsPrimaryKey | Part of Primary Key
MaximumLength | Length of Code/Text fields, e.g. [20]
MaximumValue | MaxValue
MinimumValue | MinValue

The following two might also be possible:

Name | In BC
SourceOrdering | Maybe the Enum Value IDs
ValueConstrainedToList | If Type is Option or Enum

Feel free to discuss or add additional properties.

BC Export Errors

Hello,

Hope all is well. We have begun to see some consistent export errors in the BC extension. Below and attached are examples of the errors we are receiving. We are able to then manually run the process without error. The issue does not look to be consistent as to the table that is failing or the amount of data being sent. Have any other people seen this type of ongoing error with the export from BC? Is there anything we can do to tune or configure the tool to see more consistent success in exports?

The operation was canceled.
"ADLSE HttpSSI"(CodeUnit 66059).InvokeRestApi - BC2ADLS by Stoneridge Software, LLC
"ADLSE HttpSSI"(CodeUnit 66059).InvokeRestApi line 4 - BC2ADLS by Stoneridge Software, LLC
"ADLSE Gen 2 UtilSSI"(CodeUnit 66058).AddBlockToDataBlob line 11 - BC2ADLS by Stoneridge Software, LLC
"ADLSE CommunicationSSI"(CodeUnit 66054).FlushPayload line 16 - BC2ADLS by Stoneridge Software, LLC
"ADLSE CommunicationSSI"(CodeUnit 66054).Finish line 2 - BC2ADLS by Stoneridge Software, LLC
"ADLSE CommunicationSSI"(CodeUnit 66054).TryFinish line 2 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).ExportTableUpdates line 29 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).TryExportTableData line 12 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).OnRun(Trigger) line 46 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).ExportTableUpdates line 32 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).TryExportTableData line 12 - BC2ADLS by Stoneridge Software, LLC
"ADLSE ExecuteSSI"(CodeUnit 66056).OnRun(Trigger) line 46 - BC2ADLS by Stoneridge Software, LLC

Thanks

Tom Link


Exports with different frequencies

Hi,

I have a suggestion for a change to the extension.
It changes the export structurally so I would like to discuss and know if there is support for it.

There might also be workarounds or alternative implementations of the intended feature that I have not considered.

Requested Feature
The customer I am working with would like to be able to export various data (tables) with different frequencies.
The rationale is that some data is needed closer to real-time than other data.

Observed Behavior
Currently, the scheduling functionality is bound to an "ADLSE Setup" record, where "ADLSE Table" records are lines to the "ADLSE Setup" header, so to speak.

"ADLSE Setup" is implemented following the singleton pattern.
I assume that the rationale behind the singleton implementation is that you want all setup fields on the "ADLSE Setup" record to be the same for all exports, which makes sense.

Suggested Behavior
My immediate idea is to separate the scheduling functionality from the "ADLSE Setup" record and instead introduce a new header entity for exports.

This would make it possible to gather various tables in a collection that can be set up with appropriate configurations, i.e. scheduling in my case.
I also think this would make sense for potential features in the future. One that comes to mind would be the ability to filter the tables that are exported.

Discussion
First of all, it is sensible to consider if it is even relevant to have separate exports with different scheduling configurations.
For example, one might argue that you could just use the highest frequency as the common denominator.
However, that would be based on the assumption that if the data is needed less frequently, it probably also does not change frequently. I believe this will not always be the case.

Secondly, I am not sure if this change would have significant ramifications elsewhere, e.g. for the pipelines or the export itself. I would not think so, but I am not sure.

Let me know what you think.

Best regards!

Procedure UpdateCdmJsons is failing because of LockTable in a try function

When running the tool (at least on BC 22.3) it's failing in procedure UpdateCdmJsons. This procedure is executed as part of a try function. The error below was found in the event log of my local container.

My guess is that the idea behind this LockTable is to prevent different background tasks from writing simultaneously to the storage account when updating the manifest. But unfortunately this leads to the error below. Maybe the handling of LockTable inside a try function has changed in recent BC versions? Not sure.

Best option, I guess:
Not implementing it as a TryFunction. This might lead to errors in the sync process of a table, but the current implementation just keeps the error away, and the only difference in the output is that telemetry is not shown.

Steps to reproduce:

  • Create a Docker container based on version 22.3 NL (not sure if it really matters; 21.5 is also failing)
  • Install the BC2ADLSE extension
  • Configure the storage account (Container should be empty)
  • Select a table plus all fields
  • Start the export

The table will stay in progress forever...
Check the event log of the container. The error below is reported.

Running the same repro steps in an online sandbox succeeds. It looks like try functions with a LockTable are handled differently in the cloud compared to on-premises.

EventLog/Error

Server instance: BC
Tenant ID: default
Environment Name:
Environment Type: Sandbox
Session type: Background
Session ID: 1087
User:
Type: Microsoft.Dynamics.Nav.Types.Exceptions.NavNclTryFunctionWriteException
ErrorCode: -1
ShouldCalculateALCallStack: True
SuppressMessage: False
ContainsPersonalOrRestrictedInformation: True
DiagnosticsSuppress: False
DiagnosticsMessage: Message not shown because the NavBaseException constructor was used without privacy classification
MessageWithoutPrivateInformation: Message not shown because the NavBaseException constructor was used without privacy classification
SuppressExceptionCreatedEvent: False
FatalityScope: None
ErrorLevel: Error
ALCallStack:
"ADLSE Communication"(CodeUnit 82562).UpdateCdmJsons line 20 - Azure Data Lake Storage Export by cegeka
"ADLSE Communication"(CodeUnit 82562).TryUpdateCdmJsons line 2 - Azure Data Lake Storage Export by cegeka
"ADLSE Execute"(CodeUnit 82561).OnRun(Trigger) line 82 - Azure Data Lake Storage Export by cegeka

Message: Call to the function 'LOCKTABLE' is not allowed inside the call to 'TryUpdateCdmJsons' when it is used as a TryFunction.
Source: Microsoft.Dynamics.Nav.Ncl
HResult: -2146233088
StackTrace:

API ADLSERun-82566 Table

It would be helpful to be able to extract the ADLSERun-82566 table via web services (and the standard Power BI Business Central adapter) in order to build e.g. monitoring dashboards.

A similar API is already available for ADLSETables-82561; however, this only holds the latest status. (A sketch of such an API page follows below.)
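
For illustration, exposing the run log through a custom API page could look roughly like this (a sketch; the object ID and name, the API publisher/group and the field names on table 82566 "ADLSE Run" are assumptions and may differ from the actual extension):

page 50107 "ADLSE Run API Sketch"
{
    PageType = API;
    APIPublisher = 'contoso';   // hypothetical publisher
    APIGroup = 'adlse';         // hypothetical group
    APIVersion = 'v1.0';
    EntityName = 'adlseRun';
    EntitySetName = 'adlseRuns';
    SourceTable = "ADLSE Run";
    ODataKeyFields = SystemId;
    DelayedInsert = true;

    layout
    {
        area(Content)
        {
            repeater(Records)
            {
                field(id; Rec.SystemId) { }
                // The fields below assume the run log stores a table id, a state and an error text.
                field(tableId; Rec."Table ID") { }
                field(state; Rec.State) { }
                field(errorText; Rec.Error) { }
            }
        }
    }
}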

CDM data format set to CSV shows unexpected results in Power BI

We started with the Parquet CDM data format and wanted to review the performance with Power BI if we switched over to CSV. The use case here is to import the files directly into Power BI (CDM folder view) without the use of shared metadata tables.

When using the Parquet files, everything works perfectly. For the switch, I hit reset on all the tables in BC and deleted all the data in the container in Azure Data Lake.

The export in Business Central and the consolidation pipelines in Synapse work without any issue. Retrieving the data in Power BI shows unexpected results and errors (only when using CSV; the Parquet format doesn't have these issues).


What we tried was removing all the special characters from the data. There were some, like a single quote, ampersand, minus sign and parentheses. Still, after removing all of these, the output in Power BI shows the unexpected values and errors.

Does anybody have a clue or could point me in the right direction?

Missing Step in Documentation

Hi,

I'm new to Data Lake and Synapse pipelines and am trying to create the example workflow.

When I create the datasets, it asks me for the source and the format of my data.

I initially used JSON and now receive the following error when triggering Consolidation_AllEntities:

Operation on target Entities failed: Failure happened on 'Source' side. ErrorCode=JsonInvalidDataFormat,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error occurred when deserializing source JSON file 'deltas/Customer-18/A342654A-F53B-46C0-99D8-6942A6153F18.csv'. Check if the data is in valid JSON object format.,Source=Microsoft.DataTransfer.Common,''Type=Newtonsoft.Json.JsonReaderException,Message=Error parsing NaN value. Path '', line 1, position 1.,Source=Newtonsoft.Json,'

I then tried to change the entity data source to CSV format, but now receive the following error:

Where might my mistake be? Thank you in advance!

Flag file at the end of the BC2ADLS export process

In order to save time between starting the Synapse Pipelines and the export from BC2ADLS, a flag file would help to know if the export process from BC2ADLS is finished.

Such a file could be monitored by Synapse in order to start the pipelines once it is, e.g., renewed.

Several Datasets in ADLS

We wish to have several “datasets” in ADLS; for now, it seems that all the data goes into one single folder.
We would like to have different ones (e.g. Sales, Purchases, Finance…), maybe with shared tables, and be able to set up access rights on them.
Any ideas on how to solve that?

capacity ledger entry (Calculation field)

On the capacity ledger entry we have a calculation field named Direct Cost. This field is based on the calculation method on the work center. In my case, if a work center has a calculation unit of Time, then the calculation would be: the run time on the capacity ledger entry * the cost price on the work center. If the work center calculation unit is Unit, then it uses the quantity from the capacity ledger entry * the cost price on the work center.

With BC2ADLS we could not select calculation fields. Do you know how I can simulate the calculation field with the existing data? The thing is: I could calculate the direct cost with the current data, but the calculation field saves the data on the capacity ledger entry with the price that was valid at that time. If I calculated this myself and there were a change in the cost price, then the output in Direct Cost would also change; as I understand it, that is not the case with a calculation field. So a calculation field uses the price that was valid at that moment and not the current price after a cost price change. That is also what I want for my data. I could make a snapshot of the work center table for every cost price change, but if there is an easier way without this, that would be great.

Do you know how to solve this?

Azure Blob Storage error about Permissions mismatch

Hello.
In BC I am getting the error AuthorizationFailure ("This request is not authorized to perform this operation.") when attempting CreateContainer.
But the real error is one step back, while running ContainerExists, which fails with AuthorizationPermissionMismatch ("This request is not authorized to perform this operation using this permission.").

Using the BC application source without modifications

I tried to go through the setup steps defined in Installation and configuration and set up everything described there (maybe even a bit more):

  • using an account with only one user available, which is obviously a Global Admin
  • registered an application bc2adls; specified and noted the secret; specified the redirect URL from the instructions
  • extra step - added Azure Storage.user_impersonation to the API permissions for the application
  • created a storage account
  • granted the Storage Blob Data Contributor role, plus the extra Storage Blob Data Owner and Storage Blob Data Reader roles, to the account (Contributor, Owner and User Access Administrator were there initially)
  • created a new setup using the page Export to Azure Data Lake Storage in Business Central, specifying the desired container name bc2adlscont, the tenant ID, the storage account name, and the generated client ID and secret

I assume it should now at least be able to create the container and export files to the blob storage, but I get an error.

What am I missing in the setup?

More than one CSV file in an entity is empty of data

I have run the bc2adls tool in combination with the Data Archive functionality (with some changes).
The exported files looked very good. The pipelines are running (Consolidation_AllEntities).
But there is no consolidated data in the data folder, only the headers.

  1. I have removed all the files in my Azure Data Lake.
  2. Export the data to Azure Data Lake from BC:
    Before_pipeline.zip
  3. Run the pipeline Consolidation_AllEntities.
  4. All the files in the data folder have only headers and no data.
    After_pipeline.zip
    Code.zip

Also this is the code change I have done:
