Wednesday, December 01, 2010

tracking bcm source code

First, import the original bcm code into our repository:

1. find . -type d | xargs cvs add

2. find . -type f | grep -v CVS | xargs cvs add

Check the source out of the repository:

3. cvs co sswitch

Create a branch for our local patches:

4. cvs tag -b sdk566-patches

5. cvs update -r sdk566-patches

Modify the branch.

Update the trunk to sdk580, then create a new branch:

6. cvs tag -b sdk580-patches

7. cvs update -r sdk580-patches

Incorporate the changes from the sdk566-patches branch:

8. cvs update -j sdk566-patches

Modify the code.

Wednesday, September 08, 2010

ubuntu 10.04: sluggish console on hyper-v

I’m very disappointed to see that the server edition console is unbearably slow under hyper-v. To work around this issue, you need to disable the frame buffer module:

Edit /etc/modprobe.d/blacklist-framebuffer.conf and add the following line:

blacklist vga16fb

Reboot and the console should be fine.
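For reference, the edit can be scripted so it is idempotent. This sketch runs against a scratch copy under /tmp; on the real system the file is /etc/modprobe.d/blacklist-framebuffer.conf and the append needs root:

```shell
# Scratch copy of the blacklist file; substitute the real path on the server.
conf=/tmp/blacklist-framebuffer.conf
: > "$conf"                                   # start from an empty scratch file

# Append the blacklist line only if it is not already present.
grep -qx 'blacklist vga16fb' "$conf" ||
    echo 'blacklist vga16fb' >> "$conf"

cat "$conf"                                   # prints: blacklist vga16fb
```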

Sunday, August 08, 2010

memory latency under Nehalem arch

Memory latency matters. As the figure shows, a memory access costs 65 ns to 106 ns on the Nehalem architecture.

This figure is from the paper by Daniel Molka et al., “Memory Performance and Cache Coherency Effects on an Intel Nehalem Multiprocessor System”, Intl Conf on Parallel Architectures and Compilation Techniques (PACT), 2009.

[image: memory latency figure from the paper]

Tuesday, June 29, 2010

Microsoft Failover Cluster

Here are basic articles for using hyper-v and microsoft failover cluster:

Hyper-V: Using Hyper-V and Failover Clustering

Hyper-V: Using Live Migration with Cluster Shared Volumes in Windows Server 2008 R2

Failover Clusters

Here are some useful techniques for managing a microsoft failover cluster:

1. Forcibly removing the failover cluster feature after a cluster failure.

2. Duplicated MAC address for the Microsoft Failover Cluster Virtual Miniport driver. In our testbed the OS images are cloned, so the nodes end up with the same MAC address on the Failover Cluster Virtual Miniport driver, which prevents the two nodes from joining the cluster. To join them, we had to reinstall the failover cluster feature on these machines.

3. How to create the cluster.log in Windows Server 2008 Failover Clustering

Thursday, June 03, 2010

Sharing Memory Between Drivers and Applications

static PVOID StoredPointer;   // mapped user VA, saved for MmUnmapLockedPages
static PMDL  StoredMdl;       // MDL saved alongside it

PVOID
CreateAndMapMemory()
{
    PVOID buffer;
    PMDL mdl;
    PVOID userVAToReturn;

    //
    // Allocate a 4K buffer to share with the application
    //
    buffer = ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, 'MpaM');

    if (!buffer) {
        return NULL;
    }

    //
    // Allocate and initialize an MDL that describes the buffer
    //
    mdl = IoAllocateMdl(buffer,
                        PAGE_SIZE,
                        FALSE,
                        FALSE,
                        NULL);

    if (!mdl) {
        ExFreePool(buffer);
        return NULL;
    }

    //
    // Finish building the MDL -- fill in the "page portion"
    //
    MmBuildMdlForNonPagedPool(mdl);

    //
    // The preferred V5 way to map the buffer into user space
    //
    userVAToReturn =
        MmMapLockedPagesSpecifyCache(mdl,                 // MDL
                                     UserMode,            // Mode
                                     MmCached,            // Caching
                                     NULL,                // Address
                                     FALSE,               // Bugcheck?
                                     NormalPagePriority); // Priority

    //
    // If we get NULL back, the request didn't work.
    // I'm thinkin' that's better than a bug check any day.
    //
    if (!userVAToReturn) {
        IoFreeMdl(mdl);
        ExFreePool(buffer);
        return NULL;
    }

    //
    // Store away both the mapped VA and the MDL address, so that
    // later we can call MmUnmapLockedPages(StoredPointer, StoredMdl)
    //
    StoredPointer = userVAToReturn;
    StoredMdl = mdl;

    DbgPrint("UserVA = 0x%p\n", userVAToReturn);

    return userVAToReturn;
}


How the code works:

The driver can allocate the buffer to be shared using any standard method; if there are no special requirements and the size is moderate, it can be allocated from the non-paged pool.

The driver allocates an MDL describing the buffer with IoAllocateMdl(), then calls MmBuildMdlForNonPagedPool(). This function updates the MDL so that it describes a non-paged region of kernel-mode memory.

Once the MDL describing the shared buffer has been built, the driver is ready to map the buffer into the user process's address space, which is done by MmMapLockedPagesSpecifyCache().

You must call MmMapLockedPagesSpecifyCache() in the context of the process into which you want to map the shared buffer, and specify UserMode for the AccessMode parameter. The function returns the user-mode virtual address at which the MDL is mapped. The driver can return this value to the user program, for example as the output of the IOCTL request the user program sent.

Note: IoAllocateMdl only allocates the MDL; it does not fill in the page numbers inside it. You need MmBuildMdlForNonPagedPool to fully initialize it.



ref: A Common Topic Explained - Sharing Memory Between Drivers and Applications

Sunday, May 30, 2010

NPU

Broadcom

     XGS Core Product Line

          BCM88025

          BCM8823x

          BCM88235

Ethernity

          ENET3x00

          ENET4x00

EZchip

          NPA

          NP-3

          NP-4

LSI

          ACP3448

          APP3300

Netronome

          IXP2855 (Intel Castine)

          NFP-3216

          NFP-3240

TPack

          TPX3103

          TPX4004

          TPX5104

Wintegra

          WinPath2

          WinPath2-Lite

          WinPath3

          WinPath3-SL

Xelerated

          AX310

          HX320

          HX330

Legacy Vendors:

     AppliedMicro

          nP37x0

          nP3705

     Exar (Hifn)

          5NP4G

     Mindspeed

          M27479

          M27480

          M27481

Some domestic (Chinese) network-processor chip and board vendors:

Centec Networks (苏州盛科)

48x1/2.5 GE and 4xGE ports

8x10 ports

Embedway (恒为)

InfiniWay F341: PCI-E x4 host interface, 4xSFP; it is unclear which chip it uses.

Semptian (恒杨)

SempGate NSA, based on an ASIC

SempGate MCP, based on a Cavium OCTEON multi-core NP

Sunday, May 02, 2010

Expanding NTFS partition size in VHD

1. First use VHD Resizer to expand the VHD size.

2. Attach the VHD in Computer Management -> Disk Management -> Attach VHD (Windows 7 or Server 2008 R2), then extend the NTFS partition.
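On Windows 7 / Server 2008 R2, diskpart alone can handle both steps, so the third-party resizer is optional. A sketch of a diskpart script, where the VHD path, target size (in MB), and volume number are all assumptions for illustration:

```
rem diskpart script -- path, size, and volume number are examples
select vdisk file="C:\vms\server1.vhd"
expand vdisk maximum=40960
attach vdisk
list volume
select volume 3
extend
detach vdisk
```

Note that expand vdisk must run while the VHD is detached, which is why it comes before attach vdisk.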

Friday, April 09, 2010

Passthrough PCI Device in VM

Linux KVM/Xen supports PCI passthrough, which allows a PCI device to be assigned directly to a virtual machine.

To do this, you need to:

1. Check whether your hardware supports VT-d. Here is the list of systems that support VT-d.

2. Configure and compile the kernel to enable IOMMU support. Here is the procedure to enable VT-d in KVM.

3. Create a VM and assign a PCI device to it.

I assigned an intel 82571 nic to a VM and found that the throughput easily reaches 930Mb/s with very little CPU consumed.

However, I am not very familiar with KVM or Xen management, in particular with bridged networking.

Also, you cannot assign a PCI device to the VM using libvirt.
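For reference, the non-libvirt flow of that era looked roughly like the following. This is a configuration sketch, not a tested recipe: the kernel version, the PCI address 01:00.0, and the vendor:device ID 8086:105e are placeholders, and the qemu-kvm assignment option varied across versions:

```
# /boot/grub/menu.lst -- enable the IOMMU on the kernel command line
kernel /vmlinuz-2.6.32 ro root=/dev/sda1 intel_iommu=on

# Hide the NIC from the host by binding it to pci-stub
# (vendor:device 8086:105e and address 01:00.0 are examples)
modprobe pci_stub
echo "8086 105e"    > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:01:00.0" > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo "0000:01:00.0" > /sys/bus/pci/drivers/pci-stub/bind

# Start the guest with the device assigned (qemu-kvm of that era)
qemu-kvm -m 1024 -drive file=guest.img -pcidevice host=01:00.0
```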

Other references:

1. How To Compile A Kernel – The Ubuntu Way.

2. Hot Add PCI devices

3. Libvirt: Qemu/KVM hypervisor driver

4. Using Libvirt under Ubuntu

Saturday, March 27, 2010

scvmm 2008 self-service portal

The self-service portal is a simplified version of DIT-SC. Here are the instructions to install and configure it.

scvmm cannot deploy vm

We hit error 2912. The issue is caused by a problem with the host certificate (incorrect name, IP instead of FQDN or NetBIOS) or by the certificate missing from the VMM server. To solve this problem, see KB971264.

You can use certmgr.exe to see the certificates the vmm server has. The certificates are under TrustedPeople.

Besides, you need to uninstall the VMM agent on the host. Here is the instruction.

Sunday, March 07, 2010

windows mobile sms

HKEY_LOCAL_MACHINE\Software\Microsoft\Inbox\Settings\SMSNoSentMsg=1 DWORD

Enable or Disable the SMS (Text Message) Sent Notification Bubble

HKEY_CURRENT_USER\Software\Microsoft\Inbox\Settings\OEM\SMSInboxThreadingDisabled=1 DWORD

Disable SMS Text Message Threading or Conversation Mode in Windows Mobile 6.1
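Both tweaks can be captured in one .reg file and imported with a registry editor on the device. A sketch, untested, with the values exactly as listed above:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Inbox\Settings]
"SMSNoSentMsg"=dword:00000001

[HKEY_CURRENT_USER\Software\Microsoft\Inbox\Settings\OEM]
"SMSInboxThreadingDisabled"=dword:00000001
```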

Friday, March 05, 2010

Embedded PowerPC

CFE (Common Firmware Environment) from broadcom and U-Boot from DENX are bootloaders, just like grub or lilo; they are used to boot linux kernels.

uImage: a container for a linux kernel and possibly a ramdisk. U-Boot and CFE both support this format. A uImage has a 64-byte header describing the images it contains, followed by the images themselves.
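To make the 64-byte header concrete, here is a small sketch that fabricates a file starting with the uImage magic number (0x27051956) and reads it back. The data is entirely synthetic; a real header also carries CRCs, image sizes, load/entry addresses, and a 32-byte name field:

```shell
# Magic 0x27051956 written as octal escapes for portability.
printf '\047\005\031\126' > /tmp/uimage-demo    # ih_magic, big-endian
head -c 60 /dev/zero >> /tmp/uimage-demo        # remaining 60 header bytes (zeroed)
printf 'fake-kernel-payload' >> /tmp/uimage-demo

# The first 4 bytes should be the uImage magic number.
od -An -tx1 -N4 /tmp/uimage-demo                # prints: 27 05 19 56
```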

FDT: Flat Device Tree. This is a standard format for describing the hardware of an embedded system. The bootloader passes the FDT to the linux kernel to help it boot. Usually, you write a DTS file and use DTC to convert it into a DTB file; the bootloader then parses the file and hands it to the kernel. For CFE and the MPC8548 device, the FDT is hardcoded in CFE. Details of the FDT are in “A Symphony of Flavours: Using the device tree to describe embedded hardware”.

Usually, to boot an embedded ppc system, you need:

  1. A bootloader. I use CFE as it is already on the system.
  2. A uImage. I got the kernel source from the web and used ELDK 4.1 to compile it.
  3. A rootfs. The root filesystem can contain init files, kernel modules and applications. I got two: one from ELDK and another from an existing uImage.

Besides, you may also set up a tftp server and an nfsd server. The TFTP server is used to store the uImage, and the nfsd server can serve the rootfs.

Here are some tips to extract the ramdisk from a uImage:

  1. Use mkimage to get the size of the ramdisk: mkimage -l uImage
  2. Calculate the offset of the ramdisk. Assuming the ramdisk is the last image in the uImage, its offset is (total length of uImage) - (length of ramdisk).
  3. Use dd to carve out the ramdisk: dd if=uImage bs=offset skip=1 of=ramdisk.gz
  4. The rest can be found here. Note that not every ramdisk can be mounted: some are cpio archives, and you need cpio -i --no-absolute-filenames < ramdisk to extract their contents.
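The offset trick in steps 2 and 3 can be sanity-checked on a fake uImage built from scratch. All sizes here are arbitrary and GNU coreutils are assumed; the point is that with bs set to the ramdisk's offset, skip=1 jumps over exactly one "block" (header plus kernel) and dd then copies the partial final block, which is the ramdisk:

```shell
# Build a fake uImage: 64-byte header + kernel + trailing ramdisk.
head -c 64   /dev/zero    > /tmp/fake-header
head -c 1000 /dev/urandom > /tmp/fake-kernel
head -c 512  /dev/urandom > /tmp/fake-ramdisk
cat /tmp/fake-header /tmp/fake-kernel /tmp/fake-ramdisk > /tmp/fake-uImage

total=$(wc -c < /tmp/fake-uImage)      # 1576 bytes in this example
offset=$((total - 512))                # ramdisk starts at byte 1064

# bs=offset skip=1 skips one $offset-sized block, then copies the rest.
dd if=/tmp/fake-uImage of=/tmp/out-ramdisk bs="$offset" skip=1 2>/dev/null

cmp /tmp/out-ramdisk /tmp/fake-ramdisk && echo "ramdisk recovered"
```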

Thursday, February 25, 2010

iSCSI v.s. FCoE

Today, MS and Intel announced that Windows 2008 R2 iSCSI reaches 1 million IOPS on a single 10GbE port. It means that “There is no server I/O bottleneck. If you are going with an iSCSI SAN use the native infrastructure built into the server, OS and adapter. If you are deciding between iSCSI and FC, know that at the very least the performance on the client side is a wash. Server-side ease-of-use and cost is unquestionably in iSCSI’s favor.”

The question then becomes the comparison between iSCSI and FCoE. FCoE requires DCB (Data Center Bridging, aka DCE or CEE), whereas iSCSI does not.

As iSCSI has comparable performance to FCoE, do we really need FCoE? Here is a Dell engineer's recommendation.