<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Python: module bilinear_cnn_fc</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head><body bgcolor="#f0f0f8">

<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="heading">
<tr bgcolor="#7799ee">
<td valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"> <br><big><big><strong>bilinear_cnn_fc</strong></big></big> (version 1.2, 2018-01-09)</font></td
><td align=right valign=bottom
><font color="#ffffff" face="helvetica, arial"><a href=".">index</a><br><a href="file:/data/zhangh/product/b-cnn/src/bilinear_cnn_fc.py">/data/zhangh/product/b-cnn/src/bilinear_cnn_fc.py</a></font></td></tr></table>
<p><tt>Fine-tune only the fc layer of the bilinear CNN.<br>
<br>
Usage:<br>
CUDA_VISIBLE_DEVICES=0,1,2,3 ./src/bilinear_cnn_fc.py --base_lr 0.05 --batch_size 64 --epochs 100 --weight_decay 5e-4</tt></p>
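Since only the classifier is fine-tuned, the optimizer should receive just the fc parameters while the convolutional weights stay frozen. A minimal pure-Python sketch of that filtering step (the parameter names below are illustrative stand-ins, not taken from the module):

```python
# Sketch of selecting only the classifier's parameters for the optimizer.
# The names below are illustrative stand-ins for a real named-parameter dict.
params = {
    'features.conv1_1.weight': 'frozen',
    'features.conv1_1.bias': 'frozen',
    'fc.weight': 'trainable',
    'fc.bias': 'trainable',
}

def trainable_params(named_params, prefix='fc.'):
    """Return the sorted names of parameters that should be optimized."""
    return sorted(name for name in named_params if name.startswith(prefix))

print(trainable_params(params))  # ['fc.bias', 'fc.weight']
```

In a real training script the selected parameters, rather than their names, would be handed to the SGD solver.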
<p>
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#aa55cc">
<td colspan=3 valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"><big><strong>Modules</strong></big></font></td></tr>

<tr><td bgcolor="#aa55cc"><tt> </tt></td><td> </td>
<td width="100%"><table width="100%" summary="list"><tr><td width="25%" valign=top><a href="cub200.html">cub200</a><br>
</td><td width="25%" valign=top><a href="os.html">os</a><br>
</td><td width="25%" valign=top><a href="torch.html">torch</a><br>
</td><td width="25%" valign=top><a href="torchvision.html">torchvision</a><br>
</td></tr></table></td></tr></table><p>
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#ee77aa">
<td colspan=3 valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"><big><strong>Classes</strong></big></font></td></tr>

<tr><td bgcolor="#ee77aa"><tt> </tt></td><td> </td>
<td width="100%"><dl>
<dt><font face="helvetica, arial"><a href="builtins.html#object">builtins.object</a>
</font></dt><dd>
<dl>
<dt><font face="helvetica, arial"><a href="bilinear_cnn_fc.html#BCNNManager">BCNNManager</a>
</font></dt></dl>
</dd>
<dt><font face="helvetica, arial"><a href="torch.nn.modules.module.html#Module">torch.nn.modules.module.Module</a>(<a href="builtins.html#object">builtins.object</a>)
</font></dt><dd>
<dl>
<dt><font face="helvetica, arial"><a href="bilinear_cnn_fc.html#BCNN">BCNN</a>
</font></dt></dl>
</dd>
</dl>
<p>
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#ffc8d8">
<td colspan=3 valign=bottom> <br>
<font color="#000000" face="helvetica, arial"><a name="BCNN">class <strong>BCNN</strong></a>(<a href="torch.nn.modules.module.html#Module">torch.nn.modules.module.Module</a>)</font></td></tr>

<tr bgcolor="#ffc8d8"><td rowspan=2><tt> </tt></td>
<td colspan=2><tt>B-CNN for CUB200.<br>
<br>
The B-CNN model is illustrated as follows.<br>
conv1^2 (64) -> pool1 -> conv2^2 (128) -> pool2 -> conv3^3 (256) -> pool3<br>
-> conv4^3 (512) -> pool4 -> conv5^3 (512) -> bilinear pooling<br>
-> sqrt-normalize -> L2-normalize -> fc (200).<br>
The network accepts a 3*448*448 input, and the conv5_3 activation has shape<br>
512*28*28 since the four pooling layers down-sample by a factor of 16.<br>
<br>
Attributes:<br>
features, torch.nn.<a href="torch.nn.modules.module.html#Module">Module</a>: Convolution and pooling layers.<br>
fc, torch.nn.<a href="torch.nn.modules.module.html#Module">Module</a>: Linear classifier mapping the bilinear feature to the 200 class scores.<br> </tt></td></tr>
<tr><td> </td>
<td width="100%"><dl><dt>Method resolution order:</dt>
<dd><a href="bilinear_cnn_fc.html#BCNN">BCNN</a></dd>
<dd><a href="torch.nn.modules.module.html#Module">torch.nn.modules.module.Module</a></dd>
<dd><a href="builtins.html#object">builtins.object</a></dd>
</dl>
<hr>
Methods defined here:<br>
<dl><dt><a name="BCNN-__init__"><strong>__init__</strong></a>(self)</dt><dd><tt>Declare all needed layers.</tt></dd></dl>

<dl><dt><a name="BCNN-forward"><strong>forward</strong></a>(self, X)</dt><dd><tt>Forward pass of the network.<br>
<br>
Args:<br>
X, torch.autograd.Variable of shape N*3*448*448.<br>
<br>
Returns:<br>
Score, torch.autograd.Variable of shape N*200.</tt></dd></dl>

<hr>
Methods inherited from <a href="torch.nn.modules.module.html#Module">torch.nn.modules.module.Module</a>:<br>
<dl><dt><a name="BCNN-__call__"><strong>__call__</strong></a>(self, *input, **kwargs)</dt><dd><tt>Call self as a function.</tt></dd></dl>

<dl><dt><a name="BCNN-__delattr__"><strong>__delattr__</strong></a>(self, name)</dt><dd><tt>Implement delattr(self, name).</tt></dd></dl>

<dl><dt><a name="BCNN-__dir__"><strong>__dir__</strong></a>(self)</dt><dd><tt><a href="#BCNN-__dir__">__dir__</a>() -> list<br>
default dir() implementation</tt></dd></dl>

<dl><dt><a name="BCNN-__getattr__"><strong>__getattr__</strong></a>(self, name)</dt></dl>

<dl><dt><a name="BCNN-__repr__"><strong>__repr__</strong></a>(self)</dt><dd><tt>Return repr(self).</tt></dd></dl>

<dl><dt><a name="BCNN-__setattr__"><strong>__setattr__</strong></a>(self, name, value)</dt><dd><tt>Implement setattr(self, name, value).</tt></dd></dl>

<dl><dt><a name="BCNN-__setstate__"><strong>__setstate__</strong></a>(self, state)</dt></dl>

<dl><dt><a name="BCNN-add_module"><strong>add_module</strong></a>(self, name, module)</dt><dd><tt>Adds a child module to the current module.<br>
<br>
The module can be accessed as an attribute using the given name.<br>
<br>
Args:<br>
name (string): name of the child module. The child module can be<br>
accessed from this module using the given name<br>
module (<a href="torch.nn.modules.module.html#Module">Module</a>): child module to be added to the module.</tt></dd></dl>
-
- <dl><dt><a name="BCNN-apply"><strong>apply</strong></a>(self, fn)</dt><dd><tt>Applies ``fn`` recursively to every submodule (as returned by ``.<a href="#BCNN-children">children</a>()``)<br>
- as well as self. Typical use includes initializing the parameters of a model<br>
- (see also :ref:`torch-nn-init`).<br>
- <br>
- Args:<br>
- fn (:class:`<a href="torch.nn.modules.module.html#Module">Module</a>` -> None): function to be applied to each submodule<br>
- <br>
- Returns:<br>
- <a href="torch.nn.modules.module.html#Module">Module</a>: self<br>
- <br>
- Example:<br>
- >>> def init_weights(m):<br>
- >>> print(m)<br>
- >>> if <a href="#BCNN-type">type</a>(m) == nn.Linear:<br>
- >>> m.weight.data.fill_(1.0)<br>
- >>> print(m.weight)<br>
- >>><br>
- >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))<br>
- >>> net.<a href="#BCNN-apply">apply</a>(init_weights)<br>
- Linear (2 -> 2)<br>
- Parameter containing:<br>
- 1 1<br>
- 1 1<br>
- [torch.FloatTensor of size 2x2]<br>
- Linear (2 -> 2)<br>
- Parameter containing:<br>
- 1 1<br>
- 1 1<br>
- [torch.FloatTensor of size 2x2]<br>
- Sequential (<br>
- (0): Linear (2 -> 2)<br>
- (1): Linear (2 -> 2)<br>
- )</tt></dd></dl>
-
- <dl><dt><a name="BCNN-children"><strong>children</strong></a>(self)</dt><dd><tt>Returns an iterator over immediate children modules.<br>
- <br>
- Yields:<br>
- <a href="torch.nn.modules.module.html#Module">Module</a>: a child module</tt></dd></dl>
-
- <dl><dt><a name="BCNN-cpu"><strong>cpu</strong></a>(self)</dt><dd><tt>Moves all model parameters and buffers to the CPU.<br>
- <br>
- Returns:<br>
- <a href="torch.nn.modules.module.html#Module">Module</a>: self</tt></dd></dl>
-
- <dl><dt><a name="BCNN-cuda"><strong>cuda</strong></a>(self, device=None)</dt><dd><tt>Moves all model parameters and buffers to the GPU.<br>
- <br>
- This also makes associated parameters and buffers different objects. So<br>
- it should be called before constructing optimizer if the module will<br>
- live on GPU while being optimized.<br>
- <br>
- Arguments:<br>
- device (int, optional): if specified, all parameters will be<br>
- copied to that device<br>
- <br>
- Returns:<br>
- <a href="torch.nn.modules.module.html#Module">Module</a>: self</tt></dd></dl>
-
- <dl><dt><a name="BCNN-double"><strong>double</strong></a>(self)</dt><dd><tt>Casts all parameters and buffers to double datatype.<br>
- <br>
- Returns:<br>
- <a href="torch.nn.modules.module.html#Module">Module</a>: self</tt></dd></dl>
-
- <dl><dt><a name="BCNN-eval"><strong>eval</strong></a>(self)</dt><dd><tt>Sets the module in evaluation mode.<br>
- <br>
- This has any effect only on modules such as Dropout or BatchNorm.</tt></dd></dl>
-
<dl><dt><a name="BCNN-float"><strong>float</strong></a>(self)</dt><dd><tt>Casts all parameters and buffers to float datatype.<br>
<br>
Returns:<br>
<a href="torch.nn.modules.module.html#Module">Module</a>: self</tt></dd></dl>

<dl><dt><a name="BCNN-half"><strong>half</strong></a>(self)</dt><dd><tt>Casts all parameters and buffers to half datatype.<br>
<br>
Returns:<br>
<a href="torch.nn.modules.module.html#Module">Module</a>: self</tt></dd></dl>

<dl><dt><a name="BCNN-load_state_dict"><strong>load_state_dict</strong></a>(self, state_dict, strict=True)</dt><dd><tt>Copies parameters and buffers from :attr:`state_dict` into<br>
this module and its descendants. If :attr:`strict` is ``True``, then<br>
the keys of :attr:`state_dict` must exactly match the keys returned<br>
by this module's :func:`<a href="#BCNN-state_dict">state_dict</a>()` function.<br>
<br>
Arguments:<br>
state_dict (dict): A dict containing parameters and<br>
persistent buffers.<br>
strict (bool): Strictly enforce that the keys in :attr:`state_dict`<br>
match the keys returned by this module's :func:`<a href="#BCNN-state_dict">state_dict</a>()`<br>
function.</tt></dd></dl>
<dl><dt><a name="BCNN-modules"><strong>modules</strong></a>(self)</dt><dd><tt>Returns an iterator over all modules in the network.<br>
<br>
Yields:<br>
<a href="torch.nn.modules.module.html#Module">Module</a>: a module in the network<br>
<br>
Note:<br>
Duplicate modules are returned only once. In the following<br>
example, ``l`` will be returned only once.<br>
<br>
>>> l = nn.Linear(2, 2)<br>
>>> net = nn.Sequential(l, l)<br>
>>> for idx, m in enumerate(net.<a href="#BCNN-modules">modules</a>()):<br>
>>> print(idx, '->', m)<br>
0 -> Sequential (<br>
(0): Linear (2 -> 2)<br>
(1): Linear (2 -> 2)<br>
)<br>
1 -> Linear (2 -> 2)</tt></dd></dl>

<dl><dt><a name="BCNN-named_children"><strong>named_children</strong></a>(self)</dt><dd><tt>Returns an iterator over immediate children modules, yielding both<br>
the name of the module as well as the module itself.<br>
<br>
Yields:<br>
(string, <a href="torch.nn.modules.module.html#Module">Module</a>): Tuple containing a name and child module<br>
<br>
Example:<br>
>>> for name, module in model.<a href="#BCNN-named_children">named_children</a>():<br>
>>> if name in ['conv4', 'conv5']:<br>
>>> print(module)</tt></dd></dl>

<dl><dt><a name="BCNN-named_modules"><strong>named_modules</strong></a>(self, memo=None, prefix='')</dt><dd><tt>Returns an iterator over all modules in the network, yielding<br>
both the name of the module as well as the module itself.<br>
<br>
Yields:<br>
(string, <a href="torch.nn.modules.module.html#Module">Module</a>): Tuple of name and module<br>
<br>
Note:<br>
Duplicate modules are returned only once. In the following<br>
example, ``l`` will be returned only once.<br>
<br>
>>> l = nn.Linear(2, 2)<br>
>>> net = nn.Sequential(l, l)<br>
>>> for idx, m in enumerate(net.<a href="#BCNN-named_modules">named_modules</a>()):<br>
>>> print(idx, '->', m)<br>
0 -> ('', Sequential (<br>
(0): Linear (2 -> 2)<br>
(1): Linear (2 -> 2)<br>
))<br>
1 -> ('0', Linear (2 -> 2))</tt></dd></dl>

<dl><dt><a name="BCNN-named_parameters"><strong>named_parameters</strong></a>(self, memo=None, prefix='')</dt><dd><tt>Returns an iterator over module parameters, yielding both the<br>
name of the parameter as well as the parameter itself.<br>
<br>
Yields:<br>
(string, Parameter): Tuple containing the name and parameter<br>
<br>
Example:<br>
>>> for name, param in self.<a href="#BCNN-named_parameters">named_parameters</a>():<br>
>>> if name in ['bias']:<br>
>>> print(param.size())</tt></dd></dl>

<dl><dt><a name="BCNN-parameters"><strong>parameters</strong></a>(self)</dt><dd><tt>Returns an iterator over module parameters.<br>
<br>
This is typically passed to an optimizer.<br>
<br>
Yields:<br>
Parameter: module parameter<br>
<br>
Example:<br>
>>> for param in model.<a href="#BCNN-parameters">parameters</a>():<br>
>>> print(<a href="#BCNN-type">type</a>(param.data), param.size())<br>
<class 'torch.FloatTensor'> (20L,)<br>
<class 'torch.FloatTensor'> (20L, 1L, 5L, 5L)</tt></dd></dl>

<dl><dt><a name="BCNN-register_backward_hook"><strong>register_backward_hook</strong></a>(self, hook)</dt><dd><tt>Registers a backward hook on the module.<br>
<br>
The hook will be called every time the gradients with respect to module<br>
inputs are computed. The hook should have the following signature::<br>
<br>
hook(module, grad_input, grad_output) -> Tensor or None<br>
<br>
The :attr:`grad_input` and :attr:`grad_output` may be tuples if the<br>
module has multiple inputs or outputs. The hook should not modify its<br>
arguments, but it can optionally return a new gradient with respect to<br>
input that will be used in place of :attr:`grad_input` in subsequent<br>
computations.<br>
<br>
Returns:<br>
:class:`torch.utils.hooks.RemovableHandle`:<br>
a handle that can be used to remove the added hook by calling<br>
``handle.remove()``</tt></dd></dl>

<dl><dt><a name="BCNN-register_buffer"><strong>register_buffer</strong></a>(self, name, tensor)</dt><dd><tt>Adds a persistent buffer to the module.<br>
<br>
This is typically used to register a buffer that should not be<br>
considered a model parameter. For example, BatchNorm's ``running_mean``<br>
is not a parameter, but is part of the persistent state.<br>
<br>
Buffers can be accessed as attributes using given names.<br>
<br>
Args:<br>
name (string): name of the buffer. The buffer can be accessed<br>
from this module using the given name<br>
tensor (Tensor): buffer to be registered.<br>
<br>
Example:<br>
>>> self.<a href="#BCNN-register_buffer">register_buffer</a>('running_mean', torch.zeros(num_features))</tt></dd></dl>

<dl><dt><a name="BCNN-register_forward_hook"><strong>register_forward_hook</strong></a>(self, hook)</dt><dd><tt>Registers a forward hook on the module.<br>
<br>
The hook will be called every time after :func:`forward` has computed an output.<br>
It should have the following signature::<br>
<br>
hook(module, input, output) -> None<br>
<br>
The hook should not modify the input or output.<br>
<br>
Returns:<br>
:class:`torch.utils.hooks.RemovableHandle`:<br>
a handle that can be used to remove the added hook by calling<br>
``handle.remove()``</tt></dd></dl>

<dl><dt><a name="BCNN-register_forward_pre_hook"><strong>register_forward_pre_hook</strong></a>(self, hook)</dt><dd><tt>Registers a forward pre-hook on the module.<br>
<br>
The hook will be called every time before :func:`forward` is invoked.<br>
It should have the following signature::<br>
<br>
hook(module, input) -> None<br>
<br>
The hook should not modify the input.<br>
<br>
Returns:<br>
:class:`torch.utils.hooks.RemovableHandle`:<br>
a handle that can be used to remove the added hook by calling<br>
``handle.remove()``</tt></dd></dl>

<dl><dt><a name="BCNN-register_parameter"><strong>register_parameter</strong></a>(self, name, param)</dt><dd><tt>Adds a parameter to the module.<br>
<br>
The parameter can be accessed as an attribute using given name.<br>
<br>
Args:<br>
name (string): name of the parameter. The parameter can be accessed<br>
from this module using the given name<br>
parameter (Parameter): parameter to be added to the module.</tt></dd></dl>

<dl><dt><a name="BCNN-share_memory"><strong>share_memory</strong></a>(self)</dt></dl>

<dl><dt><a name="BCNN-state_dict"><strong>state_dict</strong></a>(self, destination=None, prefix='', keep_vars=False)</dt><dd><tt>Returns a dictionary containing a whole state of the module.<br>
<br>
Both parameters and persistent buffers (e.g. running averages) are<br>
included. Keys are corresponding parameter and buffer names.<br>
<br>
When keep_vars is ``True``, it returns a Variable for each parameter<br>
(rather than a Tensor).<br>
<br>
Args:<br>
destination (dict, optional):<br>
if not None, the return dictionary is stored into destination.<br>
Default: None<br>
prefix (string, optional): Adds a prefix to the key (name) of every<br>
parameter and buffer in the result dictionary. Default: ''<br>
keep_vars (bool, optional): if ``True``, returns a Variable for each<br>
parameter. If ``False``, returns a Tensor for each parameter.<br>
Default: ``False``<br>
<br>
Returns:<br>
dict:<br>
a dictionary containing a whole state of the module<br>
<br>
Example:<br>
>>> module.<a href="#BCNN-state_dict">state_dict</a>().keys()<br>
['bias', 'weight']</tt></dd></dl>

<dl><dt><a name="BCNN-train"><strong>train</strong></a>(self, mode=True)</dt><dd><tt>Sets the module in training mode.<br>
<br>
This has an effect only on modules such as Dropout or BatchNorm.<br>
<br>
Returns:<br>
<a href="torch.nn.modules.module.html#Module">Module</a>: self</tt></dd></dl>

<dl><dt><a name="BCNN-type"><strong>type</strong></a>(self, dst_type)</dt><dd><tt>Casts all parameters and buffers to dst_type.<br>
<br>
Arguments:<br>
dst_type (type or string): the desired type<br>
<br>
Returns:<br>
<a href="torch.nn.modules.module.html#Module">Module</a>: self</tt></dd></dl>

<dl><dt><a name="BCNN-zero_grad"><strong>zero_grad</strong></a>(self)</dt><dd><tt>Sets gradients of all model parameters to zero.</tt></dd></dl>

<hr>
Data descriptors inherited from <a href="torch.nn.modules.module.html#Module">torch.nn.modules.module.Module</a>:<br>
<dl><dt><strong>__dict__</strong></dt>
<dd><tt>dictionary for instance variables (if defined)</tt></dd>
</dl>
<dl><dt><strong>__weakref__</strong></dt>
<dd><tt>list of weak references to the object (if defined)</tt></dd>
</dl>
<hr>
Data and other attributes inherited from <a href="torch.nn.modules.module.html#Module">torch.nn.modules.module.Module</a>:<br>
<dl><dt><strong>dump_patches</strong> = False</dl>

</td></tr></table> <p>
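The bilinear-pooling step that BCNN's forward pass applies between conv5_3 and the classifier can be sketched without any framework. The toy sketch below works on tiny nested lists rather than the real 512*784 feature maps; it only illustrates the outer-product, signed-sqrt, and L2-normalize sequence described in the class docstring:

```python
import math

def bilinear_pool(X):
    """Bilinear pooling of a c x (h*w) feature map given as nested lists.

    Applies the same steps the BCNN docstring lists: outer product
    averaged over spatial positions, signed square root, L2 normalization.
    """
    c, hw = len(X), len(X[0])
    # Outer product X @ X^T averaged over the h*w spatial positions.
    phi = [[sum(X[i][k] * X[j][k] for k in range(hw)) / hw
            for j in range(c)] for i in range(c)]
    # Flatten to a c*c vector and apply the signed square root.
    flat = [math.copysign(math.sqrt(abs(v)), v) for row in phi for v in row]
    # L2-normalize the result.
    norm = math.sqrt(sum(v * v for v in flat)) or 1.0
    return [v / norm for v in flat]

feat = bilinear_pool([[1.0, 2.0], [3.0, 4.0]])  # toy 2 x 2 map, not 512 x 784
print(len(feat))  # 4 values with unit L2 norm
```

For the real network, c = 512 and h*w = 28*28 = 784, so the pooled feature has 512*512 entries before the fc layer maps it to 200 scores.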
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#ffc8d8">
<td colspan=3 valign=bottom> <br>
<font color="#000000" face="helvetica, arial"><a name="BCNNManager">class <strong>BCNNManager</strong></a>(<a href="builtins.html#object">builtins.object</a>)</font></td></tr>

<tr bgcolor="#ffc8d8"><td rowspan=2><tt> </tt></td>
<td colspan=2><tt>Manager class to train the bilinear CNN.<br>
<br>
Attributes:<br>
_options: Hyperparameters.<br>
_path: Useful paths.<br>
_net: Bilinear CNN.<br>
_criterion: Cross-entropy loss.<br>
_solver: SGD with momentum.<br>
_scheduler: Reduce the learning rate by a factor of 0.1 on plateau.<br>
_train_loader: Training data.<br>
_test_loader: Testing data.<br> </tt></td></tr>
<tr><td> </td>
<td width="100%">Methods defined here:<br>
<dl><dt><a name="BCNNManager-__init__"><strong>__init__</strong></a>(self, options, path)</dt><dd><tt>Prepare the network, criterion, solver, and data.<br>
<br>
Args:<br>
options, dict: Hyperparameters.<br>
path, dict: Useful paths.</tt></dd></dl>

<dl><dt><a name="BCNNManager-getStat"><strong>getStat</strong></a>(self)</dt><dd><tt>Get the mean and std values for a given dataset.</tt></dd></dl>

<dl><dt><a name="BCNNManager-train"><strong>train</strong></a>(self)</dt><dd><tt>Train the network.</tt></dd></dl>

<hr>
Data descriptors defined here:<br>
<dl><dt><strong>__dict__</strong></dt>
<dd><tt>dictionary for instance variables (if defined)</tt></dd>
</dl>
<dl><dt><strong>__weakref__</strong></dt>
<dd><tt>list of weak references to the object (if defined)</tt></dd>
</dl>
</td></tr></table></td></tr></table><p>
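The _scheduler attribute of BCNNManager reduces the learning rate by a factor of 0.1 when the monitored metric stops improving. The bookkeeping behind that policy can be sketched in plain Python (the factor and patience values below are illustrative; in PyTorch this job is done by torch.optim.lr_scheduler.ReduceLROnPlateau):

```python
class ReduceOnPlateau:
    """Toy sketch of reduce-on-plateau learning-rate scheduling."""

    def __init__(self, lr, factor=0.1, patience=2):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float('inf')   # best (lowest) metric seen so far
        self.bad_epochs = 0        # consecutive epochs without improvement

    def step(self, metric):
        """Record one epoch's metric and return the (possibly reduced) lr."""
        if metric < self.best:
            self.best = metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor   # cut the learning rate on plateau
                self.bad_epochs = 0
        return self.lr

sched = ReduceOnPlateau(lr=0.05)
for loss in [1.0, 0.9, 0.9, 0.9, 0.9]:
    lr = sched.step(loss)
print(round(lr, 6))  # 0.005
```

With base_lr 0.05 as in the usage line above, one plateau drops the rate to 0.005, matching the factor-of-0.1 policy in the attribute description.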
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#55aa55">
<td colspan=3 valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"><big><strong>Data</strong></big></font></td></tr>

<tr><td bgcolor="#55aa55"><tt> </tt></td><td> </td>
<td width="100%"><strong>__all__</strong> = ['BCNN', 'BCNNManager']<br>
<strong>__copyright__</strong> = '2018 LAMDA'<br>
<strong>__email__</strong> = 'zhangh0214@gmail.com'<br>
<strong>__license__</strong> = 'CC BY-SA 3.0'<br>
<strong>__status__</strong> = 'Development'<br>
<strong>__updated__</strong> = '2018-01-13'</td></tr></table><p>
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#7799ee">
<td colspan=3 valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"><big><strong>Author</strong></big></font></td></tr>

<tr><td bgcolor="#7799ee"><tt> </tt></td><td> </td>
<td width="100%">Hao Zhang</td></tr></table>
</body></html>