object --+
         |
         utilities.FileUtils --+
                               |
                               XVG
Class that represents the numerical data in a grace xvg file.
All data must be numerical. :const:`NAN` and :const:`INF` values are
supported via Python's :func:`float` builtin function.
The :attr:`~XVG.array` attribute can be used to access the
array once it has been read and parsed. The :attr:`~XVG.ma`
attribute is a numpy masked array (good for plotting).
Conceptually, the file on disk and the XVG instance are considered the same
data. Whenever the filename for I/O (:meth:`XVG.read` and :meth:`XVG.write`) is
changed, the filename associated with the instance is updated as well to reflect
the association between file and instance.
With the *permissive* = ``True`` flag one can instruct the file reader to skip
unparseable lines. In this case the line numbers of the skipped lines are stored
in :attr:`XVG.corrupted_lineno`.
A number of attributes are defined to give quick access to simple statistics such as
- :attr:`~XVG.mean`: mean of all data columns
- :attr:`~XVG.std`: standard deviation
- :attr:`~XVG.min`: minimum of data
- :attr:`~XVG.max`: maximum of data
- :attr:`~XVG.error`: error on the mean, taking correlation times into
account (see also :meth:`XVG.set_correlparameters`)
- :attr:`~XVG.tc`: correlation time of the data (assuming a simple
exponential decay of the fluctuations around the mean)
These attributes are numpy arrays that correspond to the data columns,
i.e. :attr:`XVG.array`[1:].
.. Note:: - Only simple XY or NXY files are currently supported, *not*
            Grace files that contain multiple data sets separated by '&'.
          - Any kind of formatting (i.e. :program:`xmgrace` commands) is discarded.
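For example, a minimal usage sketch (the import path and file name are
assumptions and may differ in your installation)::

    from gromacs.formats import XVG   # import path is an assumption; adjust to your package layout

    xvg = XVG("energy.xvg")           # hypothetical xvg file (XY or NXY format)
    data = xvg.array                  # column-first numpy array: data[0] is X, data[1] is Y1, ...
    print(xvg.mean, xvg.std)          # per-column statistics (computed for the data columns, not X)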
Properties:

- :attr:`~XVG.array`: Represent xvg data as a (cached) numpy array.
- :attr:`~XVG.ma`: Represent data as a masked array.
- :attr:`~XVG.mean`: Mean value of all data columns.
- :attr:`~XVG.std`: Standard deviation from the mean of all data columns.
- :attr:`~XVG.min`: Minimum of the data columns.
- :attr:`~XVG.max`: Maximum of the data columns.
- :attr:`~XVG.error`: Error on the mean of the data, taking the correlation time into account.
- :attr:`~XVG.tc`: Correlation time of the data.

Method details:
Initialize the class from an xvg file.
:Arguments:
*filename*
is the xvg file; it can only be of type XY or
NXY. If it is supplied then it is read and parsed
when :attr:`XVG.array` is accessed.
*names*
optional labels for the columns (currently only
written as comments to file); string with columns
separated by commas or a list of strings
*permissive*
``False`` raises a :exc:`ValueError` and logs an error
when encountering data lines that it cannot parse.
``True`` ignores those lines and logs a warning---this is
a risk because it might read a corrupted input file [``False``]
*savedata*
``True`` includes the data (:attr:`XVG.array` and
associated caches) when the instance is pickled (see
:mod:`pickle`); this is often not desirable because the
data are already on disk (the xvg file *filename*) and the
resulting pickle file can become very big. ``False`` omits
those data from a pickle. [``False``]
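A sketch of permissive parsing (hypothetical file name; the import path as
above is an assumption)::

    from gromacs.formats import XVG   # import path is an assumption

    xvg = XVG("noisy.xvg", permissive=True)   # hypothetical, possibly corrupted file
    xvg.array                                 # accessing the array triggers reading and parsing
    if xvg.corrupted_lineno:
        print("skipped lines: %r" % (xvg.corrupted_lineno,))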
Write array to xvg file *filename* in NXY format.

.. Note:: Only plain files are supported at the moment, not compressed files.
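A minimal write sketch (file names are hypothetical; the import path is an
assumption)::

    from gromacs.formats import XVG   # import path is an assumption

    xvg = XVG("energy.xvg")           # hypothetical input file
    xvg.write("energy_copy.xvg")      # plain (uncompressed) NXY output; the instance's
                                      # filename association now points to the new file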
Correlation "time" of data. The 0-th column of the data is interpreted as a time and the decay of the data is computed from the autocorrelation function (using FFT). |
Set and change the parameters for calculations involving correlation functions.
:Keywords:
*nstep*
only process every *nstep* data point to speed up the FFT; if
left empty a default is chosen that produces roughly 25,000 data
points (or whatever is set in *ncorrel*)
*ncorrel*
If no *nstep* is supplied, aim at using *ncorrel* data points for
the FFT; sets :attr:`XVG.ncorrel`.
*force*
force recalculating correlation data even if cached values are
available
*kwargs*
see :func:`numkit.timeseries.tcorrel` for other options
.. SeeAlso:: :attr:`XVG.error` for details and references.
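A sketch of tuning the correlation analysis before querying the error
estimates (file name hypothetical; import path as above is an assumption)::

    from gromacs.formats import XVG   # import path is an assumption

    xvg = XVG("energy.xvg")                         # hypothetical input file
    xvg.set_correlparameters(nstep=10, force=True)  # use every 10th point, discard cached values
    print(xvg.tc)                                   # correlation time(s) of the data columns
    print(xvg.error)                                # error on the mean, using the correlation time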
Read and cache the file as a numpy array. The array is returned with column-first indexing, i.e. for a data file with columns X Y1 Y2 Y3 ... the array a will be a[0] = X, a[1] = Y1, ... .
Set the array data from *a* (i.e. completely replace). No sanity checks at the moment...
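A sketch of replacing the data wholesale (column-first layout; assumes the
constructor accepts no filename and the file name is hypothetical)::

    import numpy as np
    from gromacs.formats import XVG    # import path is an assumption

    x = np.linspace(0.0, 10.0, 101)
    xvg = XVG()                        # start with an empty instance
    xvg.set(np.vstack([x, np.sin(x)])) # row 0 is X, row 1 is Y1 (no sanity checks!)
    xvg.write("sine.xvg")              # hypothetical output file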
Plot xvg file data.
The first column of the data is always taken as the abscissa
X. Additional columns are plotted as ordinates Y1, Y2, ...
In the special case that there is only a single column, this column
is plotted against the index, i.e. (N, Y).
:Keywords:
*columns* : list
Select the columns of the data to be plotted; the list
is used as a numpy.array extended slice. The default is
to use all columns. Columns are selected *after* a transform.
*transform* : function
function ``transform(array) -> array`` which transforms
the original array; must return a 2D numpy array of
shape [X, Y1, Y2, ...] where X, Y1, ... are column
vectors. By default the transformation is the
identity [``lambda x: x``].
*maxpoints* : int
limit the total number of data points; matplotlib has issues processing
png files with >100,000 points and pdfs take forever to display. Set to
``None`` if really all data should be displayed. At the moment we simply
subsample the data at regular intervals. [10000]
*kwargs*
All other keyword arguments are passed on to :func:`pylab.plot`.
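A plotting sketch (assumes matplotlib/pylab is available; file name
hypothetical, import path as above is an assumption)::

    import pylab
    from gromacs.formats import XVG   # import path is an assumption

    xvg = XVG("energy.xvg")           # hypothetical input file
    xvg.plot(columns=[0, 2],          # plot Y2 against X (selection happens after the transform)
             maxpoints=5000,          # subsample to at most ~5000 points
             color="black")           # extra keywords go to pylab.plot()
    pylab.show()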
Quick hack: errorbar plot. Set *columns* to select [x, y, dy].
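A sketch of the errorbar plot, assuming the method is exposed as
``XVG.errorbar`` and that the third data column holds the errors (both
assumptions; file name hypothetical)::

    from gromacs.formats import XVG   # import path is an assumption

    xvg = XVG("profile.xvg")          # hypothetical file with columns x, y, dy
    xvg.errorbar(columns=[0, 1, 2])   # x from column 0, y from column 1, dy from column 2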
Custom pickling protocol (see http://docs.python.org/library/pickle.html): if :attr:`XVG.savedata` is ``False`` then any attributes listed in :attr:`XVG.__pickle_excluded` are *not* pickled as they are but are simply pickled with their default value.
Property details:

:attr:`~XVG.array`
   Represent xvg data as a (cached) numpy array. The array is returned with
   column-first indexing, i.e. for a data file with columns X Y1 Y2 Y3 ... the
   array a will be a[0] = X, a[1] = Y1, ... .

:attr:`~XVG.ma`
   Represent data as a masked array. The array is returned with column-first
   indexing, i.e. for a data file with columns X Y1 Y2 Y3 ... the array a will
   be a[0] = X, a[1] = Y1, ... . inf and nan are filtered via :func:`numpy.isfinite`.

:attr:`~XVG.mean`
   Mean value of all data columns.

:attr:`~XVG.std`
   Standard deviation from the mean of all data columns.

:attr:`~XVG.min`
   Minimum of the data columns.

:attr:`~XVG.max`
   Maximum of the data columns.
:attr:`~XVG.error`
   Error on the mean of the data, taking the correlation time into account. See
   Frenkel and Smit, *Understanding Molecular Simulation*, Academic Press, San
   Diego 2002, p. 526: error = sqrt(2*tc*acf(0)/T), where acf() is the
   autocorrelation function of the fluctuations around the mean, y - <y>, tc is
   the correlation time, and T the total length of the simulation.
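Written out as an equation (a restatement of the formula above, with the
symbols as defined in the description)::

   .. math::

      \epsilon_{\bar{y}} = \sqrt{\frac{2\,\tau_c\, C(0)}{T}}

   where :math:`C(t)` is the autocorrelation function of the fluctuations
   :math:`y - \langle y \rangle`, :math:`\tau_c` is the correlation time
   (:attr:`~XVG.tc`), and :math:`T` is the total length of the simulation.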
:attr:`~XVG.tc`
   Correlation time of the data. See :attr:`XVG.error` for details.