How an Existing Kernel Driver Should Be Initialized as PCI Memory-Mapped

How should an existing kernel driver be initialized as PCI memory-mapped?

So something like an ARM CPU connected to an Artix FPGA over PCIe, right?

Yes, you would need a custom PCIe driver. The PCIe configuration and data spaces would have to be mapped. Have a look at the pci_resource_{start,len} macros and the pci_ioremap_bar() function. You can then use pci_get_device() to get a pointer to the struct pci_dev and retrieve the virtual address of the mapped PCIe memory space (the BAR). The UART driver can then use that device pointer; its register map should sit at some offset from that virtual address, as per your design. You can invoke the probe call of the UARTlite IP driver from your own driver.
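For illustration, here is a rough sketch of what that BAR mapping might look like in a custom PCIe driver's probe callback. The BAR number (0), the "my_fpga" name and UART_OFFSET are assumptions about your design, not anything mandated by the kernel API:

#include <linux/pci.h>
#include <linux/io.h>

#define UART_OFFSET 0x1000   /* assumed offset of the UARTlite registers inside BAR0 */

static void __iomem *bar0;

static int my_fpga_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        resource_size_t start = pci_resource_start(pdev, 0);
        resource_size_t len   = pci_resource_len(pdev, 0);
        void __iomem *uart_regs;
        int ret;

        ret = pci_enable_device(pdev);
        if (ret)
                return ret;

        ret = pci_request_regions(pdev, "my_fpga");
        if (ret) {
                pci_disable_device(pdev);
                return ret;
        }

        dev_info(&pdev->dev, "BAR0 at %pa, length %pa\n", &start, &len);

        /* map the whole of BAR0 into kernel virtual address space */
        bar0 = pci_ioremap_bar(pdev, 0);
        if (!bar0) {
                pci_release_regions(pdev);
                pci_disable_device(pdev);
                return -ENOMEM;
        }

        /* the UART register map sits at a design-specific offset inside the BAR;
           stash the pointer here, then hand it (or the corresponding physical
           address) over to the UARTlite driver */
        uart_regs = bar0 + UART_OFFSET;
        pci_set_drvdata(pdev, uart_regs);

        return 0;
}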

"Existing kernel drivers such as xilinx have specific way to be registered (as tty device), if they are mapped directly to cpu memory map as done here with device tree". Note that this is true if we are only talking of tty devices. A GPIO peripheral IP won't be expose as tty but in /sys/class/gpio.

PCIe with multiple devices in kernel

Based on one of your other questions, I am assuming you are talking about an FPGA with custom IP blocks connected over PCIe to an ARM CPU complex.

  1. The PCIe driver does not handle any of these devices itself. The memory map/space for these IP blocks is exposed over PCIe. When any of these peripheral devices triggers an IRQ, the IRQ arrives as a PCIe MSI IRQ and is given to the respective peripheral driver's IRQ handler (a rough sketch of the MSI setup follows below).

  2. There will not be multiple PCIe device drivers.

See my response to another of your queries here.
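To make point 1 above concrete, here is a hedged sketch of how the single PCIe driver might allocate MSI vectors and attach per-peripheral interrupt handlers. The handler names, the number of vectors, and the vector-to-IP-block assignment are assumptions about the FPGA design:

#include <linux/pci.h>
#include <linux/interrupt.h>

/* hypothetical per-peripheral handlers */
static irqreturn_t uart_irq_handler(int irq, void *data) { /* ... */ return IRQ_HANDLED; }
static irqreturn_t gpio_irq_handler(int irq, void *data) { /* ... */ return IRQ_HANDLED; }

static int my_fpga_setup_irqs(struct pci_dev *pdev)
{
        int nvec, ret;

        /* ask for up to 2 MSI vectors, one per IP block in this example */
        nvec = pci_alloc_irq_vectors(pdev, 1, 2, PCI_IRQ_MSI);
        if (nvec < 0)
                return nvec;

        /* vector 0 -> UART, vector 1 -> GPIO, per the (assumed) FPGA design */
        ret = request_irq(pci_irq_vector(pdev, 0), uart_irq_handler, 0,
                          "fpga-uart", pdev);
        if (ret)
                goto free_vectors;

        if (nvec > 1) {
                ret = request_irq(pci_irq_vector(pdev, 1), gpio_irq_handler, 0,
                                  "fpga-gpio", pdev);
                if (ret)
                        goto free_uart;
        }

        return 0;

free_uart:
        free_irq(pci_irq_vector(pdev, 0), pdev);
free_vectors:
        pci_free_irq_vectors(pdev);
        return ret;
}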

mapping a memory region from kernel

You need to remap the memory region with something like ioremap() after you have requested it.

Then, as Tsyvarev and others mentioned, create and export a function in your "parent" driver that returns the mapped memory.

Here is some rough code:

#include <linux/ioport.h>
#include <linux/io.h>
#include <linux/module.h>

static void __iomem *mapped_mem;

void __iomem *map_addr(phys_addr_t phy_addr, const char *name)
{
        struct resource *res;

        /* claim the physical region so no other driver grabs it */
        res = request_mem_region(phy_addr, PAGE_SIZE * 4, name);
        if (!res)
                return NULL;

        /* ioremap_nocache() on older kernels */
        mapped_mem = ioremap(phy_addr, PAGE_SIZE * 4);
        if (!mapped_mem) {
                release_mem_region(phy_addr, PAGE_SIZE * 4);
                return NULL;
        }

        return mapped_mem;
}

void __iomem *get_mapped_addr(void)
{
        return mapped_mem;
}
EXPORT_SYMBOL(get_mapped_addr);

Now, mapped_mem should actually be tracked as part of your device's private data, but I figure that's beyond the scope of the question. Also, make sure to check for all possible errors. In particular, make sure that request_mem_region() returns a valid pointer and not NULL.
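And, as a rough sketch of the consumer side, a dependent driver could then pick up the mapping through the exported symbol. The name my_child_init and the register read at offset 0x0 are made-up examples:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/io.h>

extern void __iomem *get_mapped_addr(void);   /* exported by the "parent" driver above */

static int __init my_child_init(void)
{
        void __iomem *regs = get_mapped_addr();

        if (!regs)
                return -ENODEV;

        /* example access: read a (hypothetical) register at offset 0x0 */
        pr_info("reg[0] = 0x%08x\n", readl(regs));
        return 0;
}
module_init(my_child_init);
MODULE_LICENSE("GPL");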

Old style PCI probing

I think you are referring to Linux 2.4 or older. The current kernel device model with buses, devices and drivers has been part of the kernel ever since the 2.6 series.

What is your question exactly?

A list of PCI devices is built at boot time. Then, when a driver is registered, the id_table field of the pci_driver structure is used to match against the devices present on the bus. The pci_driver probe function is then called with a pointer to the device structure that matched.

  • the pci_driver is registered
  • for each device present on the bus, the id elements of the device (vendor ID and device ID) are compared against the entries in the id_table provided by the pci_driver
  • if there is a match, the pci_driver probe function is called, and in this probe function you can register a char device, a block device, etc.

So it is not very different from 2.4, except that all the probing, matching of drivers to devices, and so on is handled by the "device core" and not by the PCI driver.
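In code, that matching boils down to the id_table and probe fields of struct pci_driver. A minimal, hedged skeleton (the vendor/device ID pair is a placeholder):

#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id my_ids[] = {
        { PCI_DEVICE(0x10ee, 0x7011) },   /* placeholder vendor/device IDs */
        { }                               /* terminating entry */
};
MODULE_DEVICE_TABLE(pci, my_ids);

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        /* called by the device core when a device on the bus matches my_ids;
           register your char/block/tty/... device here */
        return pci_enable_device(pdev);
}

static void my_remove(struct pci_dev *pdev)
{
        pci_disable_device(pdev);
}

static struct pci_driver my_driver = {
        .name     = "my_pci_driver",
        .id_table = my_ids,
        .probe    = my_probe,
        .remove   = my_remove,
};
module_pci_driver(my_driver);   /* registers the driver; the core walks the bus and matches */
MODULE_LICENSE("GPL");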

For a detailed explanation, see this PDF and this page


