
open-nic-dpdk's Introduction

Instructions for Getting DPDK Drivers Up and Running on AMD OpenNIC

This is one of the three components of the OpenNIC project. The other components are the FPGA shell (open-nic-shell) and the Linux kernel driver (open-nic-driver).

This OpenNIC DPDK repo contains a series of patch files, plus instructions for building DPDK with drivers for OpenNIC. The basic sections are:

  1. Install build dependencies.
  2. Patch the drivers from the Xilinx QDMA DPDK driver repo for OpenNIC, using the patch files contained in this repo.
  3. Download DPDK and DPDK pktgen. The DPDK 20.11 distribution needs minor edits to include these drivers.
  4. Build.
  5. Configure /proc/cmdline and the BIOS if necessary.
  6. Run Vivado to generate an open-nic-shell bitfile configured with two CMAC ports and two PFs for testing.
  7. Write to the hardware registers to set up the test.
  8. Bind DPDK to the devices and test the driver by running pktgen on the two PFs.

The rest of this document contains step-by-step instructions for each of the sections listed above. These instructions were written assuming Ubuntu (e.g., 18.04 or 20.04).

Section 1: Install Build Dependencies

  1. Install dependencies for building DPDK:

    sudo apt install build-essential
    sudo apt install libnuma-dev
    sudo apt install pkg-config
    sudo apt install python3 python3-pip python3-setuptools
    sudo apt install python3-wheel python3-pyelftools
    sudo apt install ninja-build
    sudo pip3 install meson
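    # optional sanity check (standard version flags): confirm the build tools are on PATH
    meson --version
    ninja --version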
    
  2. Install dependencies for building pktgen-dpdk:

    sudo apt install libpcap-dev
    

    Substitute your kernel version (see "uname -r") into the command below:

    sudo apt install linux-headers-5.4.0-96-generic
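    # alternative (a standard shell expansion, assuming the matching headers
    # package exists in the configured repositories):
    sudo apt install linux-headers-$(uname -r)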
    

Section 2: Download Xilinx QDMA DPDK driver and Apply OpenNIC Patches

  1. This repo contains several patch files that must be applied to the Xilinx QDMA DPDK drivers. Clone the driver repo and check out the commit that the patches target:

    git clone https://github.com/Xilinx/dma_ip_drivers.git
    cd dma_ip_drivers
    git checkout 7859957
    cd ..
    
  2. Clone this open-nic-dpdk repo:

    git clone https://github.com/Xilinx/open-nic-dpdk
    
  3. Copy the *.patch files contained in this repo into the QDMA driver's directory.

    cp open-nic-dpdk/*.patch dma_ip_drivers
    
  4. Then apply the OpenNIC patches:

    cd dma_ip_drivers
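    # optional dry run (git's standard --check flag): reports problems without
    # modifying anything; if it fails, confirm commit 7859957 is checked out
    git apply --check *.patch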
    git apply *.patch
    cd ..
    

Section 3: Download DPDK and pktgen-dpdk

  1. Download the DPDK source and copy in the QDMA driver and example app:

    wget https://fast.dpdk.org/rel/dpdk-20.11.tar.xz
    tar xvf dpdk-20.11.tar.xz
    cd dpdk-20.11
    cp -R ../dma_ip_drivers/QDMA/DPDK/drivers/net/qdma ./drivers/net
    cp -R ../dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp ./examples
    
  2. Edit drivers/net/meson.build to insert 'qdma' into the list of drivers (near line 46), as in the sketch below. Save the changes.
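
    A minimal sketch of the edit, based on the DPDK 20.11 driver list (the neighboring entries and exact line number may differ in other releases; 'qdma' simply needs one entry in the alphabetized list):

    drivers = [
            # ... existing entries ...
            'pfe',
            'qdma',    # added for OpenNIC
            'qede',
            # ... existing entries ...
    ]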

  3. Return to the parent directory:

    cd ..
    
  4. Download and extract the pktgen-dpdk source:

    wget \
    https://git.dpdk.org/apps/pktgen-dpdk/snapshot/pktgen-dpdk-pktgen-20.11.3.tar.xz
    tar xvf pktgen-dpdk-pktgen-20.11.3.tar.xz
    

Section 4: Build

  1. Build DPDK:

    cd dpdk-20.11
    meson build
    cd build
    ninja
    sudo ninja install
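    # confirm the QDMA PMD library was installed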
    ls -l /usr/local/lib/x86_64-linux-gnu/librte_net_qdma.so
    sudo ldconfig
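    # confirm the DPDK test binary was built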
    ls -l ./app/test/dpdk-test
    cd ../..
    
  2. Build pktgen-dpdk:

    cd pktgen-dpdk-pktgen-20.11.3
    make RTE_SDK=../dpdk-20.11 RTE_TARGET=build
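    # the built binary is invoked later in this guide from under the source
    # tree (./usr/local/bin/pktgen); confirm it exists:
    ls -l ./usr/local/bin/pktgen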
    

Section 5: Configure /proc/cmdline and BIOS if necessary

  1. Make sure that IOMMU is enabled within the BIOS settings.

    Note: Enable VT-d for Intel processors within the BIOS.

  2. Set grub settings to enable hugepages and IOMMU if necessary. The example grub command line below is based on an AMD machine with 16 GB of RAM, so adjust the number of hugepages as appropriate.

    Edit /etc/default/grub to include the following line:

    GRUB_CMDLINE_LINUX=" default_hugepagesz=1G hugepagesz=1G hugepages=4"

    Note: Add intel_iommu=on above for Intel processors.
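
    For example, on an Intel machine the complete line would read (assuming the same 1 GB hugepage configuration as above):

    GRUB_CMDLINE_LINUX=" default_hugepagesz=1G hugepagesz=1G hugepages=4 intel_iommu=on"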

  3. Update grub:

    sudo update-grub
    
  4. Reboot for the changes to take effect.

    sudo reboot
    
  5. Confirm that the hugepage settings appear in /proc/cmdline:

    cat /proc/cmdline
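    # the hugepage reservation itself can be checked via standard kernel interfaces
    grep Huge /proc/meminfo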
    

Section 6: Create the OpenNIC bitfile configured for two CMAC ports

  1. Clone the latest version of open-nic-shell from GitHub (it includes some updates after the 1.0 release):

    git clone https://github.com/Xilinx/open-nic-shell.git
    
  2. Follow the instructions within https://github.com/Xilinx/open-nic-shell for building the bitfile, including specifying the appropriate board and also the following parameters: -num_cmac_port 2 -num_phys_func 2 (see the sketch below).
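
    A hypothetical invocation for an Alveo U250 (the script location, -board value, and argument syntax are assumptions here; take the exact command from the open-nic-shell README, keeping -num_cmac_port 2 -num_phys_func 2):

    cd open-nic-shell/script
    vivado -mode tcl -source build.tcl -tclargs -board au250 -num_cmac_port 2 -num_phys_func 2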

  3. Open Vivado to complete the implementation and to generate the bitfile.

  4. Use Vivado to load the bitfile.

  5. Either reboot or perform a PCI rescan to redetect the PCI devices, for example:

    echo 1 | sudo tee /sys/bus/pci/devices/0000\:d7\:00.0/rescan
    sudo setpci -s d8:00.0 COMMAND=0x02
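    # after the rescan (or reboot), the OpenNIC functions should be visible again
    lspci -d 10ee: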
    

Section 7: Initialize the OpenNIC Hardware Registers

  1. Find the PCIe bus and device IDs for the two PFs:

    $ lspci -d 10ee:
    

    08:00.0 Memory controller: Xilinx Corporation Device 903f
    08:00.1 Memory controller: Xilinx Corporation Device 913f

    Below is an example of finding the sysfs path for a device:

    $ lspci -td 10ee:
    

    -+-[0000:d8]-+-00.0
     |           \-00.1
     \-[0000:00]-

    $ cd -P /sys/bus/pci/devices/0000:d8:00.0 && pwd
    

    /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0

  2. Use a utility program such as pcimem (or similar) to write the registers that enable the CMACs and the QDMA.
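
    If pcimem is not already installed, it can be built from source; this sketch assumes the commonly used billfarrow/pcimem repository (any tool that can read/write BAR registers works):

    git clone https://github.com/billfarrow/pcimem.git
    cd pcimem
    make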

    The following example writes must be adjusted for your PCIe device's sysfs path.

    Enable the PCIe device for writing (COMMAND=0x02 sets the PCI Memory Space Enable bit):

    sudo setpci -s 08:00.0 COMMAND=0x02;
    
    sudo setpci -s 08:00.1 COMMAND=0x02;
    

    Write to QDMA:

    sudo pcimem \
    /sys/devices/pci0000:00/0000:00:03.1/0000:08:00.0/resource2 0x1000 w 0x1;
    
    sudo pcimem \
    /sys/devices/pci0000:00/0000:00:03.1/0000:08:00.0/resource2 0x2000 w 0x00010001;
    

    Write to enable CMAC0:

    sudo pcimem \
    /sys/devices/pci0000:00/0000:00:03.1/0000:08:00.0/resource2 0x8014 w 0x1;
    
    sudo pcimem \
    /sys/devices/pci0000:00/0000:00:03.1/0000:08:00.0/resource2 0x800c w 0x1;
    

    Write to enable CMAC1:

    sudo pcimem \
    /sys/devices/pci0000:00/0000:00:03.1/0000:08:00.0/resource2 0xC014 w 0x1;
    
    sudo pcimem \
    /sys/devices/pci0000:00/0000:00:03.1/0000:08:00.0/resource2 0xC00c w 0x1;
    

Section 8: Binding DPDK and Testing by Running pktgen

  1. Edit dpdk-devbind.py so that it can find the QDMA PCIe class/vendor/device, for example as shown below.
    (The file dpdk-devbind.diff in this repo contains these same lines as a diff.)

    31a32,35
    >
    > qdma = {'Class': '02', 'Vendor': '10ee', 'Device': '903f,913f',
    >               'SVendor': None, 'SDevice': None}
    >
    62c66
    < network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class]
    ---
    > network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class, qdma]
    
  2. Run dpdk-devbind.py to bind the two PFs to vfio-pci, passing the two PCIe bus and device identifiers:

    sudo dpdk_patched/dpdk-20.11/usertools/dpdk-devbind.py -b vfio-pci \
    08:00.0 08:00.1
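    # verify the binding (--status is a standard dpdk-devbind.py option)
    sudo dpdk_patched/dpdk-20.11/usertools/dpdk-devbind.py --status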
    
  3. Test by running pktgen-dpdk (the command below uses example device IDs and bus IDs for the two PFs; substitute the appropriate IDs):

    sudo dpdk_patched/pktgen-dpdk/usr/local/bin/pktgen -a 08:00.0 -a 08:00.1 \
    -d librte_net_qdma.so -l 4-10 -n 4 -a 00:03.0 -a 00:03.1 -- -m [6:7].0 -m [8:9].1
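    # once the Pktgen:/> prompt appears, traffic can be driven with standard
    # pktgen commands, e.g.:
    #   start 0      (begin transmitting on port 0)
    #   stop 0       (stop transmitting)
    #   quit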
    

Copyright Notice and Disclaimer

© Copyright 2020 – 2022 Xilinx, Inc. All rights reserved.

This file contains confidential and proprietary information of Xilinx, Inc. and is protected under U.S. and international copyright and other intellectual property laws.

DISCLAIMER

This disclaimer is not a license and does not grant any rights to the materials distributed herewith. Except as otherwise provided in a valid license issued to you by Xilinx, and to the maximum extent permitted by applicable law: (1) THESE MATERIALS ARE MADE AVAILABLE "AS IS" AND WITH ALL FAULTS, AND XILINX HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR ANY PARTICULAR PURPOSE; and (2) Xilinx shall not be liable (whether in contract or tort, including negligence, or under any other theory of liability) for any loss or damage of any kind or nature related to, arising under or in connection with these materials, including for any direct, or any indirect, special, incidental, or consequential loss or damage (including loss of data, profits, goodwill, or any type of loss or damage suffered as a result of any action brought by a third party) even if such damage or loss was reasonably foreseeable or Xilinx had been advised of the possibility of the same.

CRITICAL APPLICATIONS

Xilinx products are not designed or intended to be fail-safe, or for use in any application requiring failsafe performance, such as life-support or safety devices or systems, Class III medical devices, nuclear facilities, applications related to the deployment of airbags, or any other applications that could lead to death, personal injury, or severe property or environmental damage (individually and collectively, "Critical Applications"). Customer assumes the sole risk and liability of any use of Xilinx products in Critical Applications, subject only to applicable laws and regulations governing limitations on product liability.

THIS COPYRIGHT NOTICE AND DISCLAIMER MUST BE RETAINED AS PART OF THIS FILE AT ALL TIMES.


open-nic-dpdk's Issues

Cannot send packets

Hi, thanks for open-sourcing the DPDK solution. I tried it and am not able to send packets.

Every installation instruction works perfectly until the last step, sending a packet via pktgen. It should send packets, but I can't receive any packets on the other side of the NIC. Do you have a good way to debug this, or can you share how you would debug it? Thanks!

Here is the log pktgen generated.

~/git/open-nic-dpdk_workspace/pktgen-dpdk-pktgen-20.11.3$ sudo usr/local/bin/pktgen -a 5e:00.0 -d librte_net_qdma.so -l 0-2 -n 2 -a 5d:00.0  -- -m [6:7].0

Copyright (c) <2010-2020>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 20 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 16 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_qdma (10ee:903f) device: 0000:5e:00.0 (socket 0)
Device Type: Soft IP
IP Type: EQDMA Soft IP
Vivado Release: vivado 2020.2
PMD: qdma_get_hw_version(): QDMA RTL VERSION : RTL Base

PMD: qdma_get_hw_version(): QDMA DEVICE TYPE : Soft IP

PMD: qdma_get_hw_version(): QDMA VIVADO RELEASE ID : vivado 2020.2

PMD: qdma_identify_bars(): QDMA config bar idx :0

PMD: qdma_identify_bars(): QDMA AXI Master Lite bar idx :2

PMD: qdma_identify_bars(): QDMA AXI Bridge Master bar idx :-1

PMD: qdma_eth_dev_init(): QDMA device driver probe:
PMD: qdma_device_attributes_get(): qmax = 512, mm 1, st 1.

PMD: qdma_eth_dev_init(): PCI max bus number : 0x5e
PMD: qdma_eth_dev_init(): PF function ID: 0
PMD: QDMA PMD VERSION: 2020.2.1
qdma_dev_entry_create: Created the dev entry successfully
EAL: No legacy callbacks, legacy socket not created

*** Copyright (c) <2010-2020>, Intel Corporation. All rights reserved.
*** Pktgen  created by: Keith Wiles -- >>> Powered by DPDK <<<

 Port: Name         IfIndex Alias        NUMA  PCI
    0: net_qdma        0                   0   10ee:903f/5e:00.0

Initialize Port 0 -- TxQ 1, RxQ 1
PMD: qdma_dev_configure(): Configure the qdma engines

PMD: qdma_dev_configure(): Bus: 0x0, PF-0(DEVFN) queue_base: 0

PMD: qdma_dev_rx_queue_setup(): Configuring Rx queue id:0

PMD: qdma_dev_tx_queue_setup(): Configuring Tx queue id:0 with 512 desc

PMD: qdma_dev_tx_queue_setup(): Tx ring phys addr: 0x10AA99000, Tx Ring virt addr: 0x10AA99000
Src MAC 15:16:17:18:19:1a


PMD: qdma_dev_start(): qdma-dev-start: Starting

PMD: qdma_dev_link_update(): Link update done


WARNING: Nothing to do on lcore 1: exiting
WARNING: Nothing to do on lcore 2: exiting
- Ports 0-0 of 1   <Main Page>  Copyright (c) <2010-2020>, Intel Corporation
  Flags:Port        : -------Single      :0PMD: qdma_dev_link_update(): Link update done
Link State          :        <UP-100000-FD>      ---Total Rate---
Pkts/s Max/Rx       : -------Single      :0PMD: qdma_dev_link_update(): Link update done
       Max/Tx       :        <UP-100000-FD>                   0/0
MBits/s Rx/Tx       :                   0/0                   0/0
Broadcast           :                   0/0                   0/0
Multicast           :                   0/0                   0/0
Sizes 64            :                     0
      65-127        :                     0
      128-255       :                     0
      256-511       :                     0
      512-1023      :                     0
      1024-1518     :                     0
Runts/Jumbos        :                     0
ARP/ICMP Pkts       :                     0
Errors Rx/Tx        :                   0/0
Total Rx Pkts       :                   0/0
      Tx Pkts       :                   0/0
      Rx MBs        :                     0
      Tx MBs        :                     0
                    :                     0
Pattern Type        :                     0
Tx Count/% Rate     :         Forever /100%
Pkt Size/Tx Burst   :             64 /   32
TTL/Port Src/Dest   :         4/ 1234/ 5678
Pkt Type:VLAN ID    :       IPv4 / TCP:0001
802.1p CoS/DSCP/IPP :             0/  0/  0
VxLAN Flg/Grp/vid   :      0000/    0/    0
IP  Destination     :           192.168.1.1
    Source          :        192.168.0.1/24
MAC Destination     :     00:00:00:00:00:00
    Source          :     15:16:17:18:19:1a
PCI Vendor/Addr     :     10ee:903f/5e:00.0

-- Pktgen 20.11.3 (DPDK 20.11.0)  Powered by DPDK  (pid:7051) -----------------
** Version: DPDK 20.11.0, Command Line Interface without timers
Pktgen:/>

I use a U250 and tested with the open-nic-driver.

Lam

Not seeing Memory Controller after programming

Hi,

I got to the step where I programmed the open-nic-shell bitfile targeting a U50 device. After a cold reboot I am not seeing a memory controller as the steps show.


I have a U50, which has just one port, and get this when running the commands above:

don@machine:~$ sudo lspci -d 10ee:
04:00.0 Network controller: Xilinx Corporation Device 903f
06:00.0 Signal processing controller: Xilinx Corporation Device f401
don@machine:~$ sudo lspci -td 10ee:
don@machine:~$ 

The device 04:00.0 disappeared after the flash and reappeared after a cold reboot:

04:00.0 Network controller: Xilinx Corporation Device 903f
	Subsystem: Xilinx Corporation Device 0007
	Physical Slot: 3
	Flags: fast devsel, IRQ 5, NUMA node 0
	Memory at c5800000 (64-bit, non-prefetchable) [disabled] [size=256K]
	Memory at c5400000 (64-bit, non-prefetchable) [disabled] [size=4M]
	Capabilities: [40] Power Management version 3
	Capabilities: [60] MSI-X: Enable- Count=10 Masked-
	Capabilities: [70] Express Endpoint, MSI 00
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [1c0] Secondary PCI Express
	Capabilities: [1f0] Virtual Channel

Am I doing something wrong? Any help or guidance is appreciated. Thanks in advance !!

Performance of open-nic-dpdk

Hi,
How can I reproduce the results reported in "Xilinx Answer 71453 QDMA Performance Report" with pktgen? I am only getting 10 Gbps link speed on a Threadripper Pro with a U280 card. Are those results only for the QDMA example design, or do they apply to OpenNIC as well?
Regards,
Anees

Pktgen 20.11.3 may show inaccurate statistics

I was running a loopback test and found that the Tx MBits/s rate doesn't match the Rx rate, as shown below. However, once I switched to Pktgen 21.03.1, the discrepancy was gone.

I also noticed this discrepancy in this post.

Ports 0-1 of 2   <Main Page>  Copyright (c) <2010-2020>, Intel Corporation
  Flags:Port        : P------Range       :0 P------Range       :1PMD: qdma_dev_link_update(): Link update done
Link State          :        <UP-100000-FD>        <UP-100000-FD>      ---Total Rate---
Pkts/s Max/Rx       :       7675460/7557906       7368360/7251015     15043820/14808921
       Max/Tx       :       7383936/7263393       7695872/7573920     15079808/14837313
MBits/s Rx/Tx       :            48712/1394            45772/1454            94484/2848
Broadcast           :                     0                     0
Multicast           :                     0                     0
Sizes 64            :               1558894                854696
      65-127        :              20026127              20171588
      128-255       :              41562202              40255501
      256-511       :              79199307              82529526
      512-1023      :             158741787             152219145
      1024-1518     :             154165462             140864226
Runts/Jumbos        :                   0/0                   0/0
ARP/ICMP Pkts       :                   0/0                   0/0
Errors Rx/Tx        :                   0/0                   0/0
Total Rx Pkts       :             452242056             434001460
      Tx Pkts       :             434765600             453223840
      Rx MBs        :               2914836               2739635
      Tx MBs        :                 83474                 87018
                    :
Pattern Type        :               abcd...               abcd...
Tx Count/% Rate     :         Forever /100%         Forever /100%
Pkt Size/Tx Burst   :             64 /   32             64 /   32
TTL/Port Src/Dest   :         4/ 1234/ 5678         4/ 1234/ 5678
Pkt Type:VLAN ID    :       IPv4 / TCP:0001       IPv4 / TCP:0001
802.1p CoS/DSCP/IPP :             0/  0/  0             0/  0/  0
VxLAN Flg/Grp/vid   :      0000/    0/    0      0000/    0/    0
IP  Destination     :           192.168.1.1           192.168.0.1
    Source          :        192.168.0.1/24        192.168.1.1/24
MAC Destination     :     15:16:17:18:19:1a     15:16:17:18:19:1a
    Source          :     15:16:17:18:19:1a     15:16:17:18:19:1a
PCI Vendor/Addr     :     10ee:903f/3b:00.0     10ee:913f/3b:00.1

-- Pktgen 20.11.3 (DPDK 20.11.0)  Powered by DPDK  (pid:12297) ----------------

Bottleneck on minimum-size packet performance

Hi, Chris,

As mentioned in the OpenNIC FAQ, "This slower clock domain cannot handle the theoretical worst-case packet rate but reflects the fact that, for packets to and from the host over PCIe, the QDMA only runs at 250 MHz." I am wondering whether it is possible to improve the worst-case packet rate for packets hitting the host, or whether it is constrained by the hardware/QDMA limitations.

Thanks!
Han

Packet loss with more than 14 queues

I am experiencing considerable packet loss (around 50%) in DPDK when I use 15 or more queues for QDMA, regardless of throughput. For the same throughput, when I use 14 queues, every DPDK thread gets all the packets, but with 15 or more queues, every queue loses almost half of the packets it is supposed to receive.

Pktgen application does not receive packets

Hi Team,
I was using an Alveo U200, and I performed these steps (https://github.com/Xilinx/open-nic-dpdk ) to test pktgen. I used two QSFPs connected in loopback.
I want to know how to get the second PFs, i.e., 00:03.0 and 00:03.1, into the pktgen command: sudo Builddir/app/pktgen -a 08:00.0 -a 08:00.1 -d librte_net_qdma.so -l 4-10 -n 4 -a 00:03.0 -a 00:03.1 -- -m [6:7].0 -m [8:9].1.

I have only two Network controller BDFs, 08:00.0 and 08:00.1. I ran the pktgen command as sudo Builddir/app/pktgen -a 08:00.0 -d librte_net_qdma.so -l 4-10 -n 4 -a 08:00.1 -- -m [6:7].0 -m [8:9].1. In this the TX packet count shows 511, but the RX count shows zero. What is the cause of the 0 RX packets?

How do I get the values 00:03.0 and 00:03.1? Do they require a different test setup (other than loopback)?
As I am new to DPDK, please let me know what changes need to be made. Any suggestions are helpful.

Can't capture packets using dpdk-pdump

Hi,

We are running into an issue trying to capture packets with dpdk-pdump.

testpmd was run as the primary process using the following command:

.../build/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained --forward-mode=rxonly
EAL: Detected 72 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_qdma (10ee:903f) device: 0000:04:00.0 (socket 0)
PMD: QDMA PMD VERSION: 2020.2.1
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set rxonly packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 15:16:17:18:19:1A
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
testpmd> port stop all
Stopping ports...
Checking link statuses...
Done
testpmd> port start all
Port 0: 15:16:17:18:19:1A
Checking link statuses...
Done
testpmd> quit

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
Port 1 is closed
Done

Bye...
.../build/app# 

Packets were being transmitted to this port at all times. The corresponding pdump was started against the same device, and its output shows it is indeed chained to testpmd; closing testpmd stops the pdump window as well.

.../build/app/dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/test-capture-1004.pcap'
EAL: Detected 72 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_279378_2cfa511064dd9
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_qdma (10ee:903f) device: 0000:04:00.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
Port 1 MAC: 02 70 63 61 70 00
 core (0), capture for (1) tuples
 - port 0 device ((null)) queue 65535
Primary process is no longer active, exiting...
EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
EAL: Fail to send request /var/run/dpdk/rte/mp_socket:mp_pdump
pdump_prepare_client_request(): client request for pdump enable/disable failed
EAL: Cannot find plugged device ()
##### PDUMP DEBUG STATS #####
 -packets dequeued:			0
 -packets transmitted to vdev:		0
 -packets freed:			0
...:/home/vcr/Desktop#

What is the problem here? Any suggestions?

Needs to program FPGA after every run

I have this weird problem with QDMA DPDK: the first run after programming the FPGA works, but subsequent runs don't receive/transmit anything, or they terminate in rte_eal_init(). Has anyone had similar problems?
