[Thuban-commits] r2889 - in trunk/thuban: . Thuban/Model Thuban/UI libraries/pyshapelib libraries/shapelib test
scm-commit@wald.intevation.org
Sun Sep 27 22:36:27 CEST 2009
Author: bramz
Date: 2009-09-27 22:36:21 +0200 (Sun, 27 Sep 2009)
New Revision: 2889
Added:
trunk/thuban/libraries/pyshapelib/dbflibmodule.c
trunk/thuban/libraries/pyshapelib/pyshapelib_common.h
trunk/thuban/libraries/pyshapelib/shapelibmodule.c
trunk/thuban/libraries/shapelib/safileio.c
Removed:
trunk/thuban/libraries/pyshapelib/dbflib.i
trunk/thuban/libraries/pyshapelib/dbflib.py
trunk/thuban/libraries/pyshapelib/dbflib_wrap.c
trunk/thuban/libraries/pyshapelib/shapelib.i
trunk/thuban/libraries/pyshapelib/shapelib.py
trunk/thuban/libraries/pyshapelib/shapelib_wrap.c
Modified:
trunk/thuban/ChangeLog
trunk/thuban/Thuban/Model/table.py
trunk/thuban/Thuban/UI/baserenderer.py
trunk/thuban/Thuban/UI/controls.py
trunk/thuban/Thuban/UI/labeldialog.py
trunk/thuban/Thuban/UI/tableview.py
trunk/thuban/Thuban/UI/view.py
trunk/thuban/libraries/pyshapelib/ChangeLog
trunk/thuban/libraries/pyshapelib/NEWS
trunk/thuban/libraries/pyshapelib/README
trunk/thuban/libraries/pyshapelib/pyshapelib_api.h
trunk/thuban/libraries/pyshapelib/pytest.py
trunk/thuban/libraries/pyshapelib/setup.py
trunk/thuban/libraries/pyshapelib/shptreemodule.c
trunk/thuban/libraries/pyshapelib/testdbf.py
trunk/thuban/libraries/shapelib/dbfopen.c
trunk/thuban/libraries/shapelib/shapefil.h
trunk/thuban/libraries/shapelib/shpopen.c
trunk/thuban/libraries/shapelib/shptree.c
trunk/thuban/setup.py
trunk/thuban/test/test_load.py
trunk/thuban/test/test_load_1_0.py
Log:
reintegrating WIP-pyshapelib-Unicode branch (r2888) in trunk
Modified: trunk/thuban/ChangeLog
===================================================================
--- trunk/thuban/ChangeLog 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/ChangeLog 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,3 +1,13 @@
+2009-08-26 Bram de Greve <bram.degreve at bramz.net>
+
+ * Reintegrating WIP-pyshapelib-Unicode branch (r2888) in trunk
+
+2009-08-18 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * Forward porting trunk (2864:2885] to WIP-pyshapelib-Unicode branch.
+
2009-08-18 Didrik Pinte <dpinte at dipole-consulting.com>
* NEWS : updated to 1.2.2 release
@@ -137,6 +147,20 @@
* libraries/pyshapelib/shptreemodule.c : bugfix coming from Debian
see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=469007
+2008-02-14 Bernhard Reiter <bernhard at intevation.de>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * Thuban/Model/table.py: Using internal_from_unicode() when reading
+ dbflib column names and unicode_from_internal() when writing them.
+
+ * test/test_load_1_0.py(TestNonAsciiColumnName),
+ test/test_load.py(TestNonAsciiColumnName): Compare result
+ of layer.GetClassificationColumn() with a string in the internal encoding;
+ this makes this test work with --internal-encoding=unicode.
+ Given that the new dbflib should deal with unicode objects,
+ call dbf.add_field with a unicode object.
+
2008-02-13 Bernhard Reiter <bernhard at intevation.de>
* test/test_save.py, Thuban/Model/load.py, setup.py:
@@ -146,6 +170,12 @@
* Doc/technotes/release_process.txt: Better use thuban-announce@
to announce. ;)
+2008-02-04 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * Forward porting trunk (2819:2834] to WIP-pyshapelib-Unicode branch.
+
2008-02-03 Bernhard Reiter <bernhard at intevation.de>
* Thuban/version.py, setup.py: Changing version number back to svn
@@ -247,6 +277,12 @@
* po/*: Run the update (basically a date change)
+2008-01-29 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * Forward porting trunk (2807:2819] to WIP-pyshapelib-Unicode branch.
+
2008-01-29 Bernhard Reiter <bernhard at intevation.de>
* NEWS: Updated changes up to upcoming release 1.2.1.
@@ -361,6 +397,25 @@
thuban_cfg.py and different handling of Extensions. Bumped copyright
to include 2008.
+2008-01-17 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * setup.py: updated for pyshapelib
+
+2008-01-16 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * Forward porting trunk (2801:2807] to WIP-pyshapelib-Unicode branch.
+ The previous merge was erroneously indicated as (2793:2793],
+ that should have been (2793:2801].
+
+ * libraries/shapelib: forward ported from cvs.maptools.org:
+ shapefil.h v1.44, safileio.c v1.4
+
+ * setup.py: updated for pyshapelib
+
2008-01-09 Bernhard Reiter <bernhard at intevation.de>
Making Thuban robust against shapefiles which contain empty shapes.
@@ -378,12 +433,68 @@
* Thuban/Model/layer.py, Thuban/UI/viewport.py: removed unused import
of point_in_polygon_shape.
+2008-01-09 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * forward ported trunk (2793:2793] to WIP-pyshapelib-Unicode
+ branch
+
+ * libraries/shapelib: forward ported from cvs.maptools.org:
+ shpopen.c v1.58, dbfopen.c v1.81, safileio.c v1.3
+
+ * setup.py: updated for pyshapelib
+
+ * Thuban/Model/table.py: use LDID_ESRI_ANSI as code page
+ when creating .dbf files.
+
2008-01-08 Bernhard Reiter <bernhard at intevation.de>
* libraries/thuban/wxproj.cpp(project_points): made function robust
against being called with no points. (also fixes main defect related
to [#571] (MemoryError when adding a particular shp layer))
+2008-01-03 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * added support for UTF-8 filenames to shapelib (currently
+ Thuban specific, but this should go upstream soon), and applied
+ it to pyshapelib. See libraries/pyshapelib/ChangeLog
+
+ * setup.py: removed the HAVE_CODE_PAGE macro from the pyshapelib
+ extension.
+
+2007-12-15 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * The shapelib and pyshapelib Unicode support now reads the .CPG
+ file, so that we can finally use UTF-8 content. See the ChangeLog
+ in pyshapelib.
+
+ * Thuban/Model/table.py: when opening DBF files, ask to return
+ Unicode strings, and use the UTF-8 encoding when creating new
+ shapefiles.
+
+2007-12-12 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ Porting shapelib from the maptools source at cvs.maptools.org.
+ Currently, this will have no support for code pages and wide
+ character filenames (Win32), but we'll get that back in later,
+ in parallel with the cvs.maptools.org source tree. There might be
+ some bugfixes from our side missing, but we'll sort that out later
+ as well.
+
+ * libraries/shapelib: Revisions ported from cvs.maptools.org:
+ shapefil.h v1.40, shpopen.c v1.57, dbfopen.c v1.76,
+ shptree.c v1.11, safileio.c v1.1.
+
+ * setup.py, libraries/pyshapelib/setup.py: add safileio.c to build
+ process and use the dbf_macros() in setup.py as well.
+
2007-12-09 Bernhard Reiter <bernhard at intevation.de>
WMS Extension improvements: Better logging for problem analysis.
@@ -706,6 +817,26 @@
* MANIFEST.in: Added toplevel ChangeLog to distribution.
Made sure that files under packaging are actually packaged.
+2007-04-12 Didrik Pinte <dpinte at itae.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * Removed workaround for file encoding in the Thuban code
+
+2007-03-14 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * setup.py : updated in order to use the new dbflib C module in place
+ of the old SWIG one
+
+2007-03-12 Didrik Pinte <dpinte at itae.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * setup.py : updated in order to use the new pyshapelib C module in place
+ of the old SWIG one
+
2007-02-26 Bernhard Reiter <bernhard at intevation.de>
* MANIFEST.in: Added *.xmi to Doc so that ThubanModel.xmi is included.
Modified: trunk/thuban/Thuban/Model/table.py
===================================================================
--- trunk/thuban/Thuban/Model/table.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/Thuban/Model/table.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -19,6 +19,8 @@
from base import TitledObject
+from Thuban import internal_from_unicode, unicode_from_internal
+
import dbflib
# the field types supported by a Table instance.
@@ -86,7 +88,7 @@
title = os.path.splitext(os.path.basename(self.filename))[0]
TitledObject.__init__(self, title)
- self.dbf = dbflib.DBFFile(filename)
+ self.dbf = dbflib.open(filename, return_unicode = True)
# If true, self.dbf is open for writing.
self._writable = 0
@@ -96,8 +98,10 @@
self.column_map = {}
for i in range(self.NumColumns()):
ftype, name, width, prec = self.dbf.field_info(i)
+ name = internal_from_unicode(name)
ftype = dbflib_fieldtypes[ftype]
index = len(self.columns)
+
col = DBFColumn(name, ftype, width, prec, index)
self.columns.append(col)
self.column_map[name] = col
@@ -226,7 +230,7 @@
order.
"""
if not self._writable:
- new_dbf = dbflib.DBFFile(self.filename, "r+b")
+ new_dbf = dbflib.open(self.filename, "r+b", return_unicode = True)
self.dbf.close()
self.dbf = new_dbf
self._writable = 1
@@ -462,7 +466,7 @@
indices to be saved to the file, otherwise all rows are saved.
"""
- dbf = dbflib.create(filename)
+ dbf = dbflib.create(filename, code_page = dbflib.LDID_ESRI_ANSI, return_unicode = True)
dbflib_fieldtypes = {FIELDTYPE_STRING: dbflib.FTString,
FIELDTYPE_INT: dbflib.FTInteger,
@@ -478,7 +482,8 @@
prec = getattr(col, "prec", 12)
else:
prec = 0
- dbf.add_field(name_map[col.name], dbflib_fieldtypes[col.type],
+ dbf.add_field(unicode_from_internal(name_map[col.name]),
+ dbflib_fieldtypes[col.type],
width, prec)
if rows is None:
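
For illustration, here is a minimal sketch of the dbflib keyword API that table.py
now uses in the hunks above. The file name, field names and record content are made
up; open(), create(), add_field() and field_info() appear in the diff itself, while
field_count(), read_record() and write_record() are assumed to carry over unchanged
from the old wrapper.

    import dbflib

    # Create a small DBF with an ESRI ANSI code page and unicode handling
    # enabled, mirroring what DBFTable.Write() does above.
    dbf = dbflib.create("example.dbf", code_page = dbflib.LDID_ESRI_ANSI,
                        return_unicode = True)
    dbf.add_field(u"NAME", dbflib.FTString, 20, 0)    # field name as unicode
    dbf.add_field(u"VALUE", dbflib.FTInteger, 10, 0)
    dbf.write_record(0, {"NAME": u"K\xf6ln", "VALUE": 42})
    dbf.close()

    # Re-open it the way DBFTable.__init__ now does and read the schema back.
    dbf = dbflib.open("example.dbf", return_unicode = True)
    for i in range(dbf.field_count()):
        ftype, name, width, prec = dbf.field_info(i)
        print((name, ftype, width, prec))    # name comes back as unicode
    print(dbf.read_record(0))
    dbf.close()
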
Modified: trunk/thuban/Thuban/UI/baserenderer.py
===================================================================
--- trunk/thuban/Thuban/UI/baserenderer.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/Thuban/UI/baserenderer.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -583,7 +583,7 @@
x, y = forward(x, y)
x = int(round(x * scale + offx))
y = int(round(-y * scale + offy))
- width, height = self.dc.GetTextExtent(text.decode('iso-8859-1'))
+ width, height = self.dc.GetTextExtent(text)
if label.halign == ALIGN_LEFT:
# nothing to be done
pass
@@ -598,4 +598,4 @@
y = y - height
elif label.valign == ALIGN_CENTER:
y = y - height/2
- self.dc.DrawText(text.decode('iso-8859-1'), x, y)
+ self.dc.DrawText(text, x, y)
Modified: trunk/thuban/Thuban/UI/controls.py
===================================================================
--- trunk/thuban/Thuban/UI/controls.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/Thuban/UI/controls.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -51,7 +51,7 @@
name = names[i]
value = record[name]
self.InsertStringItem(i, name)
- self.SetStringItem(i, 1, str(value).decode('iso-8859-1'))
+ self.SetStringItem(i, 1, unicode(value))
values[i] = value
self.values = values
Modified: trunk/thuban/Thuban/UI/labeldialog.py
===================================================================
--- trunk/thuban/Thuban/UI/labeldialog.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/Thuban/UI/labeldialog.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -53,7 +53,7 @@
def OnOK(self, event):
result = self.list.GetValue()
if result is not None:
- self.end_dialog(wx.ID_OK, str(result))
+ self.end_dialog(wx.ID_OK, result)
else:
self.end_dialog(wx.ID_CANCEL, None)
Modified: trunk/thuban/Thuban/UI/tableview.py
===================================================================
--- trunk/thuban/Thuban/UI/tableview.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/Thuban/UI/tableview.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -78,10 +78,7 @@
record = dict()
for (key, val) in self.table.ReadRowAsDict(row, \
row_is_ordinal = 1).items():
- if isinstance(val, str):
- record[key] = val.decode('iso-8859-1')
- else:
- record[key] = val
+ record[key] = val
return record[self.columns[col][0]]
def SetValue(self, row, col, value):
Modified: trunk/thuban/Thuban/UI/view.py
===================================================================
--- trunk/thuban/Thuban/UI/view.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/Thuban/UI/view.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -452,7 +452,7 @@
dc = wx.ClientDC(self)
font = wx.Font(10, wx.SWISS, wx.NORMAL, wx.NORMAL)
dc.SetFont(font)
- return dc.GetTextExtent(text.decode('iso-8859-1'))
+ return dc.GetTextExtent(text)
def LabelShapeAt(self, x, y, text=None):
"""Add or remove a label at window position x, y.
Modified: trunk/thuban/libraries/pyshapelib/ChangeLog
===================================================================
--- trunk/thuban/libraries/pyshapelib/ChangeLog 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/ChangeLog 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,8 +1,285 @@
+2009-09-26 Bram de Greve <bram.degreve at bramz.net>
+
+ reintegrating WIP-pyshapelib-Unicode branch (r2888) in trunk
+
+2009-09-17 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * dbflibmodule.c: add delete_field if the necessary API function is available
+ in shapefil.h
+
+2009-08-15 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * setup.py: add compiler flags to suppress a spurious warning
+
+ * dbflibmodule.c: out-of-range reads now raise IndexError
+
+ * testdbf.py: moved the dbflib tests here in unittest format (with a unicode test)
+
+2008-01-17 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * pyshapelib_common.h, shapelibmodule.c, dbflibmodule.c, shptreemodule.c:
+ fixed some build warnings (gcc 3.3.5)
+
+2008-01-16 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * pyshapelib_common.h, setup.py: as shapefil.h r1.44 of cvs.maptools.org
+ now defines SHPAPI_UTF8_HOOKS, we no longer need to check for the
+ availability of SASetupUtf8Hooks ourselves.
+
+ * pyshapelib_common.h: only use PyOS_ascii_atof on Python >= 2.4. Otherwise
+ use the standard atof function.
+
+2008-01-15 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelibmodule.c: In shapefile_write_object, use PyArg_ParseTuple to
+ check type of object instead of running our own test.
+
+2008-01-11 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * dbflibmodule.c: dbfopen.c returns integers with width > 10 as FTDouble
+ to avoid overflow in a C int. Also, all integers are read as doubles anyway
+ (dbfopen.c casts integers to int only at the very last moment). Use
+ PyLong_FromDouble to convert all values with 0 decimals to long integers.
+
+2008-01-08 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelibmodule.c, dbflibmodule.c, pyshapelib_common.h:
+ - fixed copyright notice and header
+ - rename PYSHAPELIB_FILESYSTEMENCODING to PYSHAPELIB_FILENAME_ENCODING
+
+ * shapelibmodule.c, dbflibmodule.c:
+ - add _have_utf8_hooks constant with value of HAVE_UTF8_HOOKS
+
+ * dbflibmodule.c:
+ - set default code_page and codec to LDID/87 (ESRI ANSI, 0x57) and cp1252.
+ - use HAVE_CODE_PAGE again to disable code_page support when building with
+ older (but official) shapelib libraries
+ - decode ?, t, f, y, n values for the logical field type correctly.
+
+ * pyshapelib_common.h:
+ - add definitions of Py_RETURN_NONE, Py_RETURN_TRUE and Py_RETURN_FALSE
+ if they are not defined yet (for older Python versions)
+
+ * setup.py:
+ - pass HAVE_??? macros to shapelib extension as well.
+ - add HAVE_CODE_PAGE and HAVE_UTF8_HOOKS
+ - only compile and link safileio.c if it exists.
+ - updated library version and authors.
+
+ * README:
+ - removed references to SWIG
+ - mentioned compatibility issues with current official shapelib releases
+ and how to get the shapelib from the CVS instead.
+
+ * NEWS:
+ - added an entry for pyshapelib version 0.4 with all new features.
+
+2007-12-18 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * dbflibmodule.c: shapelib now has a hook to set your own atof function.
+ See http://bugzilla.maptools.org/show_bug.cgi?id=1615#c3
+ - Before opening or creating a DBF file, atof is now set to PyOS_ascii_atof.
+ - corresponding revisions of shapelib on cvs.maptools.org: shapefil.h v1.42,
+ dbfopen.c v1.78, safileio.c v1.3
+
+2007-12-15 Bram de Greve <bram.degreve at bramz.net>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * dbflibmodule.c: Unicode support mark II. The language_driver members and
+ functions have been dropped, as they are not sufficient to indicate code
+ pages specified by .CPG files.
+ - code_page: DBFFile now has member code_page that returns the DBF code page
+ as a string. This is either the content of the .CPG file, or a string of
+ the form "LDID/42" if there's no .CPG file and the LDID number of the .DBF
+ file is used to indicate the code page instead.
+ - DBFFile also sports a member codec, which is the name of the Python codec used
+ for the code page.
+ - code_page: a new optional argument on create() to specify the DBF file's
+ code page on creation. This is _not_ a Python codec name, but one of the
+ constants dbflib.LDID_* and dbflib.CPG_*.
+ - return_unicode: a new optional argument on DBFFile, open() and create().
+ It tells the DBFFile to decode the textual content using its codec and
+ return it as Unicode. It is False by default, which means you get the raw
+ encoded string instead.
+ - codecs_map: a new optional argument on DBFFile, open() and create().
+ It allows you to provide your own dictionary that links DBF code pages to
+ Python codecs in case the default builtin one is not correct for your
+ application.
+ - HAVE_LANGUAGE_DRIVER is gone, and HAVE_CODE_PAGE is here instead.
+ - corresponding revisions of shapelib on cvs.maptools.org: shapefil.h v1.41,
+ dbfopen.c v1.77, safileio.c v1.2
+
2007-04-25 Bernhard Herzog <bh at intevation.de>
+ branches/WIP-pyshapelib-Unicode:
+
* shptreemodule.c: Fix copyright notice. It should have been
LGPL.
+2007-04-12 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ * dbflibmodule.c: Expanded Unicode support to field names (formerly it was
+ only available for string values in the records). Renamed the write_field
+ function to write_attribute to be symmetric with the read_attribute
+ function that already existed.
+
+2007-04-11 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * dbflibmodule.c, pyshapelib_common.h, setup.py: attempt to add Unicode and
+ Language Driver ID (LDID) support to dbflib. Before strings are sent to the
+ underlying shapelib, they are encoded using the code page specified by the
+ database's LDID, if present. Knowing this LDID requires some unofficial
+ modifications to maptools' shapelib. Backwards compatibility is ensured by
+ detecting whether this field is present and setting HAVE_LANGUAGE_DRIVER
+ accordingly in setup.py. In the absence of the LDID, dbflib assumes a
+ Windows ANSI codepage (cp1252).
+ New or modified functions/attributes of the DBFFile class:
+ - read_record(...), DBFFile.read_attribute(...): modified, now return
+ Unicode strings.
+ - write_record(...) and write_field(...): modified, now accept both regular
+ and Unicode strings but both are encoded.
+ - language_driver (read-only): new, the numerical value of the LDID
+ (exists only if HAVE_LANGUAGE_DRIVER == 1)
+ New or modified functions/constants of the dbflib module:
+ - language_driver_codec(...): added, translates a numerical LDID into the
+ string name of the Python codec used to encode/decode the strings.
+ (exists only if HAVE_LANGUAGE_DRIVER == 1)
+ - language_driver_name(...): added, translates a numerical LDID into a string
+ representing the corresponding constant.
+ (exists only if HAVE_LANGUAGE_DRIVER == 1)
+ - LDID_NOT_SET, LDID_DOS_USA, ...: constants representing language drivers.
+ (exists only if HAVE_LANGUAGE_DRIVER == 1)
+
+2007-03-29 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelibmodule.c, dbflibmodule.c, pyshapelib_common.h: added support for
+ Win32 wide character file API. Unicode filenames are now fully supported
+ on the Windows platform: for example, exotic filenames like the Greek letter
+ pi (u"\u03c0"). This is mostly mimicked from Python's fileobject.c, and
+ needed some severe changes in the C++ shapelib library to support the wide
+ filename API. All XOpen and XCreate functions now have XOpenW and XCreateW
+ counterparts, and some common code has been split out into XOpenEx and
+ XCreateEx.
+ I hope these modifications might one day end up in an official shapelib
+ release. Meanwhile, compatibility is guaranteed as the specific Unicode
+ code paths are not compiled if the modifications are not found.
+
+2007-03-22 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelibmodule.c, dbflibmodule.c: in __init__ of ShapeFile and DBFFile,
+ raise a proper IOError if opening the file failed.
+
+ * dbflibmodule.c: commit function was incorrectly pointing to
+ dbflib_read_record
+
+2007-03-21 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shptreemodule.c: restoring something that shouldn't have been committed.
+
+2007-03-15 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelib.c: When creating measured shapes (XYM), treat the M value
+ as optional (defaults to zero). Similarly, for
+ 3D shapes (XYZM), treat both the Z and M values as optional
+ (both default to zero). When M values are given,
+ None is accepted as a "no-data" value, and is stored as zero
+ (ESRI shapefile specs define any M value smaller than 1e-38
+ as no-data). Added an unpack_vertex() function to lift some
+ of the load of shpobject_init. Fixed a missing break and
+ PyMem_Free in build_vertex_list(), shapefile_init() and
+ shapelib_create().
+
+ * dbflibmodule.c: Added support for the FTLogical field type.
+
+ * shapelibmodule.c, dbflibmodule.c: Added 'name' and 'mode'
+ keywords for ShapeFile and DBFFile constructors and the module's
+ open() function, similar to Python's file(). Reformatted
+ the doc strings to have a standard look and feel when parsed
+ through pydoc.
+
+ * shapelib_common.h: added no-data constants.
+
+ * pytest.py: Added tests for multipatch shapefile with XYZM values.
+ Added tests for FTLogical field.
+
+2007-03-15 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelibmodule.c, dbflibmodule.c: added some Unicode support for the
+ filenames (no internal encoding for DBFFile yet). It should now have
+ Unicode support similar to Python's file() (concerning the filename, that is).
+
+2007-03-14 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelibmodule.c: added support for shapetypes with Z and M values
+
+2007-03-14 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * dbflibmodule.c, dbflib.i: replaced dbflib.i by dbflibmodule.c to use
+ hand-crafted Python bindings instead of SWIG generated ones
+
+ * shapelibmodule.c, shapelib.c: Renamed shapelib.c to shapelibmodule.c
+ to match style of dbflibmodule.c and shptreemodule.c. Changed some
+ (well, most) names to match same style.
+
+ * pyshapelib_common.h: do all necessary includes here
+
+ * setup.py: updated building of dbflib.
+
+2007-03-13 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelib.c, shapelib_common.h: Added part_types() to SHPObject to
+ return tuple of part types. Added __repr__ operators to return a
+ string that can reconstruct the object using eval()
+
+ * pytest.py: Added tests for part_types() and __repr__.
+ Humanized the output a bit.
+
+2007-03-12 Bram de Greve <bram.degreve at intec.ugent.be>
+
+ branches/WIP-pyshapelib-Unicode:
+
+ * shapelib.c, shapelib.i: replaced shapelib.i by shapelib.c to use
+ hand-crafted Python bindings instead of SWIG generated ones.
+
+ * pyshapelib_common.h: New file with some common stuff for both
+ shapelib and dbflib
+
+ * pyshapelib_api.h, setup.py: import/build shapelib instead of shapelibc
+
2006-09-24 Bernhard Reiter <bernhard at intevation.de>
* dbflib_wrap.c, README: Checked for python version >= 2.4.0a0
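
As a small illustrative sketch of the code-page handling described in the 2007-12-15
entry above: the file name, field and custom codecs map below are made up, while
code_page, codec, return_unicode, codecs_map, read_attribute() and LDID_ESRI_ANSI
are taken from the entries themselves.

    import dbflib

    # Map DBF code pages to Python codec names; keys use the "LDID/nn"
    # (decimal) form or a .CPG string such as "UTF-8".
    my_codecs = {"LDID/87": "cp1252", "UTF-8": "utf_8"}

    dbf = dbflib.create("towns.dbf", code_page = dbflib.LDID_ESRI_ANSI,
                        return_unicode = True, codecs_map = my_codecs)
    print(dbf.code_page)                 # e.g. "LDID/87"
    print(dbf.codec)                     # e.g. "cp1252"
    dbf.add_field(u"TOWN", dbflib.FTString, 40, 0)
    dbf.write_record(0, [u"Li\xe8ge"])   # encoded with dbf.codec on the way out
    dbf.close()

    # Without return_unicode the raw cp1252-encoded string is returned.
    raw = dbflib.open("towns.dbf")
    print(raw.read_attribute(0, 0))
    raw.close()
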
Modified: trunk/thuban/libraries/pyshapelib/NEWS
===================================================================
--- trunk/thuban/libraries/pyshapelib/NEWS 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/NEWS 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,3 +1,70 @@
+pyshapelib 0.4 (2008-??-??)
+
+Module shapelib:
+
+ * Rewritten as a hand-crafted module instead of a SWIG generated one.
+
+ * SHPObject now has a method part_types() to return a tuple of the
+ part types when appropriate. Otherwise, it returns None.
+
+ * Added support for shapetypes with Z and M values. When creating measured
+ shapes (XYM), the M value is optional (defaults to zero). Similarly, for 3D
+ shapes (XYZM), both Z and M values are optional (both default to zero).
+ None is accepted as an M value, but it is stored as zero internally (ESRI
+ shapefile specs define any M value smaller than 1e-38 as no-data).
+
+ * Added 'name' and 'mode' keywords for ShapeFile constructors and the module's
+ open() function, similar to Python's file().
+
+ * Unicode strings are now accepted as filenames, also on Windows.
+
+
+Module dbflib:
+
+ * Rewritten as a hand-crafted module instead of a SWIG generated one.
+
+ * DBFFile objects can be requested to return string content as unicode, via
+ the optional return_unicode argument in create(), open() and the DBFFile
+ constructor. If so, string content is decoded using the file's codepage.
+ Otherwise, the raw encoded string is returned. return_unicode is False
+ by default.
+
+ * DBFFiles now support code pages for string content. These code pages can
+ either be specified by the numerical LDID field in the .dbf file, or by an
+ additional .cpg file that contains a single string with the code page name.
+ Both systems are unified into a single string; the former is of the form
+ "LDID/nn" where nn is a _decimal_ number between 1 and 255.
+
+ When creating a DBFFile, you can specify the code page to use by the optional
+ code_page argument that defaults to "LDID/87", the ESRI ANSI code page 0x57.
+ You can inspect the code page of a DBFFile through the readonly code_page
+ member.
+
+ Code pages are associated with Python codecs through a codecs map (see below).
+ This codec is used to encode or decode string content to or from dbflib. You
+ can inspect the codec in use via the readonly codec member of DBFFile.
+
+ dbflib supports a number of constants of the form LDID_??? and CPG_??? that
+ are names of code pages that are supported by the builtin codecs map.
+
+ * An optional custom codecs_map can be passed to create(), open() and the
+ DBFFile constructor to map code pages to codecs, for when the builtin codecs
+ map does not fit your needs.
+
+ The keys of this map are strings like the values of the constants LDID_???
+ and CPG_???. E.g. "LDID/87" for LDID_ESRI_ANSI (use _decimal_ values in the
+ string) or "UTF-8" for UTF-8 encoding.
+
+ The values of this map should be names of Python codecs. See Python Library
+ Reference - 4.9.2 Standard Encodings to see what codecs are builtin.
+
+ * Added 'name' and 'mode' keywords for the DBFFile constructor and the module's
+ open() function, similar to Python's file().
+
+ * Unicode strings are now accepted as filenames, also on Windows.
+
+
+
pyshapelib 0.3 (2004-05-28)
===========================
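
The shapelib items above can be pictured with a short sketch. SHPT_POINTM, the
SHPObject(type, id, vertices) signature, create() and write_object() are assumed
from the pre-existing pyshapelib API; the optional M value, None as no-data and
part_types() follow the NEWS text itself.

    import shapelib

    # A measured point (XYM): the M value is optional and None is stored
    # as zero, as described above.
    obj = shapelib.SHPObject(shapelib.SHPT_POINTM, 0, [[(10.0, 20.0, None)]])
    print(obj.part_types())        # None for shape types without parts
    print(repr(obj))               # __repr__ can reconstruct the object via eval()

    shp = shapelib.create("points_m.shp", shapelib.SHPT_POINTM)
    shp.write_object(-1, obj)      # -1 appends a new shape
    shp.close()
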
Modified: trunk/thuban/libraries/pyshapelib/README
===================================================================
--- trunk/thuban/libraries/pyshapelib/README 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/README 2009-09-27 20:36:21 UTC (rev 2889)
@@ -13,10 +13,6 @@
Shapelib is a free software library for reading and writing ESRI shape
files and can be found at http://shapelib.maptools.org/.
-The bindings were partly created with SWIG, a tool that can generate
-wrappers of C and C++ libraries for a variety of scripting languages.
-It's homepage is http://www.swig.org.
-
The bindings themselves don't have a homepage at the moment, but the
source tarballs/zip files can be downloaded from
http://ftp.intevation.de/users/bh/pyshapelib/
@@ -28,18 +24,33 @@
To compile the bindings, you need shapelib 1.2.9 or newer and Python 2.0
or newer.
-SWIG is not required. The files generated by SWIG are contained in the
-archive. If you modify shapelib.i or dbflib.i and need to recreate the
-generated files, you need SWIG 1.3 Alpha 5. It's unlikely that other
-versions will work.
-In addition you need to add the following lines to initdbflibc(void)
-in dbflib_wrap.c.
- /* because we are in a python module now, we can give out
- * pointers to python's locale agnostic function
- * XXX this clearly is a hack
- */
- DBFSetatof_function(&PyOS_ascii_atof);
+IMPORTANT:
+To be able to use all features of pyshapelib, you'll need to grab the
+shapelib source code from the CVS, as the latest official release 1.2.10 is
+rather outdated (April 2003).
+
+cvs -d:pserver:cvsanon@cvs.maptools.org:/cvs/maptools/cvsroot co -P shapelib
+However, if you build against an official release, the build process will
+degrade gracefully. The following features will be missing when compiled
+against shapelib 1.2.10 or earlier. Their availability will be reflected by
+a constant _have_???.
+
+- codepage support for character sets other than ANSI (dbflib):
+The codepage member of DBFFile will be missing, and create() will not
+accept the codepage parameter. The codecs_map parameter will be missing from
+open(), create() and DBFFile(), and so will be the various LDID_xxx and CPG_xxx
+constants. _have_codepage will be set to zero.
+
+- Full Unicode support for filenames on Windows (shapelib, dbflib):
+You will be restricted to ANSI character filenames. _have_utf8_hooks will be set
+to zero.
+
+- The commit() method on DBFFile (dbflib):
+This will be missing. _have_update_header will be set to zero.
+
+
+
You also need Python, of course. If you installed prebuilt packages
such as RPMs of some Linux distributions, please make sure that the
development package is also installed.
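
The graceful degradation described above can be probed at runtime, roughly as
follows. The _have_codepage and _have_update_header constants are the ones named
in the README text; the file name and fields are made up.

    import dbflib

    # Only pass the code-page keywords when the underlying shapelib provides
    # them; otherwise fall back to the plain call so the same script also
    # runs against shapelib 1.2.10.
    if getattr(dbflib, "_have_codepage", 0):
        dbf = dbflib.create("out.dbf", code_page = dbflib.LDID_ESRI_ANSI,
                            return_unicode = True)
    else:
        dbf = dbflib.create("out.dbf")

    dbf.add_field("ID", dbflib.FTInteger, 10, 0)
    dbf.write_record(0, [1])

    # commit()/DBFUpdateHeader is only available with a recent shapelib.
    if getattr(dbflib, "_have_update_header", 0):
        dbf.commit()
    dbf.close()
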
Deleted: trunk/thuban/libraries/pyshapelib/dbflib.i
===================================================================
--- trunk/thuban/libraries/pyshapelib/dbflib.i 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/dbflib.i 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,597 +0,0 @@
-/* SWIG (www.swig.org) interface file for the dbf interface of shapelib
- *
- * At the moment (Dec 2000) this file is only useful to generate Python
- * bindings. Invoke swig as follows:
- *
- * swig -python -shadow dbflib.i
- *
- * to generate dbflib_wrap.c and dbflib.py. dbflib_wrap.c defines a
- * bunch of Python-functions that wrap the appropriate dbflib functions
- * and dbflib.py contains an object oriented wrapper around
- * dbflib_wrap.c.
- *
- * This module defines one object type: DBFFile.
- */
-
-/* this is the dbflib module */
-%module dbflib
-
-/* first a %{,%} block. These blocks are copied verbatim to the
- * dbflib_wrap.c file and are not parsed by SWIG. This is the place to
- * import headerfiles and define helper-functions that are needed by the
- * automatically generated wrappers.
- */
-
-%{
-#include "shapefil.h"
-
-
-/* Read one attribute from the dbf handle and return it as a new python object
- *
- * If an error occurs, set the appropriate Python exception and return
- * NULL.
- *
- * Assume that the values of the record and field arguments are valid.
- * The name argument will be passed to DBFGetFieldInfo as is and should
- * thus be either NULL or a pointer to an array of at least 12 chars
- */
-static PyObject *
-do_read_attribute(DBFInfo * handle, int record, int field, char * name)
-{
- int type, width;
- PyObject *value;
-
- type = DBFGetFieldInfo(handle, field, name, &width, NULL);
- /* For strings NULL and the empty string are indistinguishable
- * in DBF files. We prefer empty strings instead for backwards
- * compatibility reasons because older wrapper versions returned
- * emtpy strings as empty strings.
- */
- if (type != FTString && DBFIsAttributeNULL(handle, record, field))
- {
- value = Py_None;
- Py_INCREF(value);
- }
- else
- {
- switch (type)
- {
- case FTString:
- {
- const char * temp = DBFReadStringAttribute(handle, record, field);
- if (temp)
- {
- value = PyString_FromString(temp);
- }
- else
- {
- PyErr_Format(PyExc_IOError,
- "Can't read value for row %d column %d",
- record, field);
- value = NULL;
- }
- break;
- }
- case FTInteger:
- value = PyInt_FromLong(DBFReadIntegerAttribute(handle, record,
- field));
- break;
- case FTDouble:
- value = PyFloat_FromDouble(DBFReadDoubleAttribute(handle, record,
- field));
- break;
- default:
- PyErr_Format(PyExc_TypeError, "Invalid field data type %d",
- type);
- value = NULL;
- }
- }
- if (!value)
- return NULL;
-
- return value;
-}
-
-/* the read_attribute method. Return the value of the given record and
- * field as a python object of the appropriate type.
- *
- * In case of error, set a python exception and return NULL. Since that
- * value will be returned to the python interpreter as is, the
- * interpreter should recognize the exception.
- */
-
-static PyObject *
-DBFInfo_read_attribute(DBFInfo * handle, int record, int field)
-{
- if (record < 0 || record >= DBFGetRecordCount(handle))
- {
- PyErr_Format(PyExc_ValueError,
- "record index %d out of bounds (record count: %d)",
- record, DBFGetRecordCount(handle));
- return NULL;
- }
-
- if (field < 0 || field >= DBFGetFieldCount(handle))
- {
- PyErr_Format(PyExc_ValueError,
- "field index %d out of bounds (field count: %d)",
- field, DBFGetFieldCount(handle));
- return NULL;
- }
-
- return do_read_attribute(handle, record, field, NULL);
-}
-
-
-/* the read_record method. Return the record record as a dictionary with
- * whose keys are the names of the fields, and their values as the
- * appropriate Python type.
- *
- * In case of error, set a python exception and return NULL. Since that
- * value will be returned to the python interpreter as is, the
- * interpreter should recognize the exception.
- */
-
-static PyObject *
-DBFInfo_read_record(DBFInfo * handle, int record)
-{
- int num_fields;
- int i;
- int type, width;
- char name[12];
- PyObject *dict;
- PyObject *value;
-
- if (record < 0 || record >= DBFGetRecordCount(handle))
- {
- PyErr_Format(PyExc_ValueError,
- "record index %d out of bounds (record count: %d)",
- record, DBFGetRecordCount(handle));
- return NULL;
- }
-
- dict = PyDict_New();
- if (!dict)
- return NULL;
-
- num_fields = DBFGetFieldCount(handle);
- for (i = 0; i < num_fields; i++)
- {
- value = do_read_attribute(handle, record, i, name);
- if (!value)
- goto fail;
-
- PyDict_SetItemString(dict, name, value);
- Py_DECREF(value);
- }
-
- return dict;
-
- fail:
- Py_XDECREF(dict);
- return NULL;
-}
-
-/* the write_record method. Write the record record given either as a
- * dictionary or a sequence (i.e. a list or a tuple).
- *
- * If it's a dictionary the keys must be the names of the fields and
- * their value must have a suitable type. Only the fields actually
- * contained in the dictionary are written. Fields for which there's no
- * item in the dict are not modified.
- *
- * If it's a sequence, all fields must be present in the right order.
- *
- * In case of error, set a python exception and return NULL. Since that
- * value will be returned to the python interpreter as is, the
- * interpreter should recognize the exception.
- *
- * The method is implemented with two c-functions, write_field to write
- * a single field and DBFInfo_write_record as the front-end.
- */
-
-
-/* write a single field of a record. */
-static int
-write_field(DBFHandle handle, int record, int field, int type,
- PyObject * value)
-{
- char * string_value;
- int int_value;
- double double_value;
-
- if (value == Py_None)
- {
- if (!DBFWriteNULLAttribute(handle, record, field))
- {
- PyErr_Format(PyExc_IOError,
- "can't write NULL field %d of record %d",
- field, record);
- return 0;
- }
- }
- else
- {
- switch (type)
- {
- case FTString:
- string_value = PyString_AsString(value);
- if (!string_value)
- return 0;
- if (!DBFWriteStringAttribute(handle, record, field, string_value))
- {
- PyErr_Format(PyExc_IOError,
- "can't write field %d of record %d",
- field, record);
- return 0;
- }
- break;
-
- case FTInteger:
- int_value = PyInt_AsLong(value);
- if (int_value == -1 && PyErr_Occurred())
- return 0;
- if (!DBFWriteIntegerAttribute(handle, record, field, int_value))
- {
- PyErr_Format(PyExc_IOError,
- "can't write field %d of record %d",
- field, record);
- return 0;
- }
- break;
-
- case FTDouble:
- double_value = PyFloat_AsDouble(value);
- if (double_value == -1 && PyErr_Occurred())
- return 0;
- if (!DBFWriteDoubleAttribute(handle, record, field, double_value))
- {
- PyErr_Format(PyExc_IOError,
- "can't write field %d of record %d",
- field, record);
- return 0;
- }
- break;
-
- default:
- PyErr_Format(PyExc_TypeError, "Invalid field data type %d", type);
- return 0;
- }
- }
-
- return 1;
-}
-
-static
-PyObject *
-DBFInfo_write_record(DBFHandle handle, int record, PyObject *record_object)
-{
- int num_fields;
- int i, length;
- int type, width;
- char name[12];
- PyObject * value = NULL;
-
- num_fields = DBFGetFieldCount(handle);
-
- /* We used to use PyMapping_Check to test whether record_object is a
- * dictionary like object instead of PySequence_Check to test
- * whether it's a sequence. Unfortunately in Python 2.3
- * PyMapping_Check returns true for lists and tuples too so the old
- * approach doesn't work anymore.
- */
- if (PySequence_Check(record_object))
- {
- /* It's a sequence object. Iterate through all items in the
- * sequence and write them to the appropriate field.
- */
- length = PySequence_Length(record_object);
- if (length != num_fields)
- {
- PyErr_SetString(PyExc_TypeError,
- "record must have one item for each field");
- goto fail;
- }
- for (i = 0; i < length; i++)
- {
- type = DBFGetFieldInfo(handle, i, name, &width, NULL);
- value = PySequence_GetItem(record_object, i);
- if (value)
- {
- if (!write_field(handle, record, i, type, value))
- goto fail;
- Py_DECREF(value);
- }
- else
- {
- goto fail;
- }
- }
- }
- else
- {
- /* It's a dictionary-like object. Iterate over the names of the
- * known fields and write the corresponding item
- */
- for (i = 0; i < num_fields; i++)
- {
- type = DBFGetFieldInfo(handle, i, name, &width, NULL);
-
- /* if the dictionary has the key name write that object to
- * the appropriate field, otherwise just clear the python
- * exception and do nothing.
- */
- value = PyMapping_GetItemString(record_object, name);
- if (value)
- {
- if (!write_field(handle, record, i, type, value))
- goto fail;
- Py_DECREF(value);
- }
- else
- {
- PyErr_Clear();
- }
- }
- }
-
- Py_INCREF(Py_None);
- return Py_None;
-
- fail:
- Py_XDECREF(value);
- return NULL;
-}
-%}
-
-
-/* The commit method implementation
- *
- * The method relies on the DBFUpdateHeader method which is not
- * available in shapelib <= 1.2.10. setup.py defines
- * HAVE_UPDATE_HEADER's value depending on whether the function is
- * available in the shapelib version the code is compiled with.
- */
-%{
-static
-void
-DBFInfo_commit(DBFHandle handle)
-{
-#if HAVE_UPDATE_HEADER
- DBFUpdateHeader(handle);
-#endif
-}
-%}
-
-
-/*
- * The SWIG Interface definition.
- */
-
-/* include some common SWIG type definitions and standard exception
- handling code */
-%include typemaps.i
-%include exception.i
-
-/* As for ShapeFile in shapelib.i, We define a new C-struct that holds
- * the DBFHandle. This is mainly done so we can separate the close()
- * method from the destructor but it also helps with exception handling.
- *
- * After the DBFFile has been opened or created the handle is not NULL.
- * The close() method closes the file and sets handle to NULL as an
- * indicator that the file has been closed.
- */
-
-%{
- typedef struct {
- DBFHandle handle;
- } DBFFile;
-%}
-
-
-/* The first argument to the DBFFile methods is a DBFFile pointer.
- * We have to check whether handle is not NULL in most methods but not
- * all. In the destructor and the close method, it's OK for handle to be
- * NULL. We achieve this by checking whether the preprocessor macro
- * NOCHECK_$name is defined. SWIG replaces $name with the name of the
- * function for which the code is inserted. In the %{,%}-block below we
- * define the macros for the destructor and the close() method.
- */
-
-%typemap(python,check) DBFFile *{
-%#ifndef NOCHECK_$name
- if (!$target || !$target->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
-%#endif
-}
-
-%{
-#define NOCHECK_delete_DBFFile
-#define NOCHECK_DBFFile_close
-%}
-
-
-/* An exception handle for the constructor and the module level open()
- * and create() functions.
- *
- * Annoyingly, we *have* to put braces around the SWIG_exception()
- * calls, at least in the python case, because of the way the macro is
- * written. Of course, always putting braces around the branches of an
- * if-statement is often considered good practice.
- */
-%typemap(python,except) DBFFile * {
- $function;
- if (!$source)
- {
- SWIG_exception(SWIG_MemoryError, "no memory");
- }
- else if (!$source->handle)
- {
- SWIG_exception(SWIG_IOError, "$name failed");
- }
-}
-
-/* Exception handler for the add_field method */
-%typemap(python,except) int DBFFile_add_field {
- $function;
- if ($source < 0)
- {
- SWIG_exception(SWIG_RuntimeError, "add_field failed");
- }
-}
-
-/* define and use some typemaps for the field_info() method whose
- * C-implementation has three output parameters that are returned
- * through pointers passed into the function. SWIG already has
- * definitions for common types such as int* and we can use those for
- * the last two parameters:
- */
-
-%apply int * OUTPUT { int * output_width }
-%apply int * OUTPUT { int * output_decimals }
-
-/* the fieldname has to be defined manually: */
-%typemap(python,ignore) char *fieldname_out(char temp[12]) {
- $target = temp;
-}
-
-%typemap(python,argout) char *fieldname_out() {
- PyObject * string = PyString_FromString($source);
- $target = t_output_helper($target,string);
-}
-
-
-
-/*
- * The SWIG-version of the DBFFile struct
- */
-
-typedef struct
-{
- %addmethods {
- DBFFile(const char *file, const char * mode = "rb") {
- DBFFile * self = malloc(sizeof(DBFFile));
- if (self)
- self->handle = DBFOpen(file, mode);
- return self;
- }
-
- ~DBFFile() {
- if (self->handle)
- DBFClose(self->handle);
- free(self);
- }
-
- void close() {
- if (self->handle)
- DBFClose(self->handle);
- self->handle = NULL;
- }
-
- int field_count() {
- return DBFGetFieldCount(self->handle);
- }
-
- int record_count() {
- return DBFGetRecordCount(self->handle);
- }
-
- int field_info(int iField, char * fieldname_out,
- int * output_width, int * output_decimals) {
- return DBFGetFieldInfo(self->handle, iField, fieldname_out,
- output_width, output_decimals);
- }
-
- PyObject * read_record(int record) {
- return DBFInfo_read_record(self->handle, record);
- }
-
- PyObject * read_attribute(int record, int field) {
- return DBFInfo_read_attribute(self->handle, record, field);
- }
-
- int add_field(const char * pszFieldName, DBFFieldType eType,
- int nWidth, int nDecimals) {
- return DBFAddField(self->handle, pszFieldName, eType, nWidth,
- nDecimals);
- }
-
- PyObject *write_record(int record, PyObject *dict_or_sequence) {
- return DBFInfo_write_record(self->handle, record,
- dict_or_sequence);
- }
-
- void commit() {
- DBFInfo_commit(self->handle);
- }
- /* Delete the commit method from the class if it doesn't have a
- * real implementation.
- */
- %pragma(python) addtomethod="__class__:if not dbflibc._have_commit: del commit"
-
- /* The __del__ method generated by the old SWIG version we're using
- * tries to access self.thisown which may not be set at all when
- * there was an exception during construction. Therefore we
- * override it with our own version.
- * FIXME: It would be better to upgrade to a newer SWIG version
- * or to get rid of SWIG entirely.
- */
- %pragma(python) addtoclass = "
- def __del__(self,dbflibc=dbflibc):
- if getattr(self, 'thisown', 0):
- dbflibc.delete_DBFFile(self)
- "
-
-
- }
-} DBFFile;
-
-
-/*
- * Two module level functions, open() and create() that correspond to
- * DBFOpen and DBFCreate respectively. open() is equivalent to the
- * DBFFile constructor.
- */
-
-
-%{
- DBFFile * open_DBFFile(const char * file, const char * mode)
- {
- DBFFile * self = malloc(sizeof(DBFFile));
- if (self)
- self->handle = DBFOpen(file, mode);
- return self;
- }
-%}
-
-%name(open) %new DBFFile * open_DBFFile(const char * file,
- const char * mode = "rb");
-
-%{
- DBFFile * create_DBFFile(const char * file)
- {
- DBFFile * self = malloc(sizeof(DBFFile));
- if (self)
- self->handle = DBFCreate(file);
- return self;
- }
-%}
-%name(create) %new DBFFile * create_DBFFile(const char * file);
-
-
-
-/* constant definitions copied from shapefil.h */
-typedef enum {
- FTString,
- FTInteger,
- FTDouble,
- FTInvalid
-} DBFFieldType;
-
-
-/* Put the value of the HAVE_UPDATE_HEADER preprocessor macro into the
- * wrapper so that the __class__ pragma above knows when to remove the
- * commit method
- */
-const int _have_commit = HAVE_UPDATE_HEADER;
-
Deleted: trunk/thuban/libraries/pyshapelib/dbflib.py
===================================================================
--- trunk/thuban/libraries/pyshapelib/dbflib.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/dbflib.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,76 +0,0 @@
-# This file was created automatically by SWIG.
-import dbflibc
-class DBFFile:
- def __init__(self,*args):
- self.this = apply(dbflibc.new_DBFFile,args)
- self.thisown = 1
-
- def __del__(self,dbflibc=dbflibc):
- if self.thisown == 1 :
- dbflibc.delete_DBFFile(self)
- def close(*args):
- val = apply(dbflibc.DBFFile_close,args)
- return val
- def field_count(*args):
- val = apply(dbflibc.DBFFile_field_count,args)
- return val
- def record_count(*args):
- val = apply(dbflibc.DBFFile_record_count,args)
- return val
- def field_info(*args):
- val = apply(dbflibc.DBFFile_field_info,args)
- return val
- def read_record(*args):
- val = apply(dbflibc.DBFFile_read_record,args)
- return val
- def read_attribute(*args):
- val = apply(dbflibc.DBFFile_read_attribute,args)
- return val
- def add_field(*args):
- val = apply(dbflibc.DBFFile_add_field,args)
- return val
- def write_record(*args):
- val = apply(dbflibc.DBFFile_write_record,args)
- return val
- def commit(*args):
- val = apply(dbflibc.DBFFile_commit,args)
- return val
- def __repr__(self):
- return "<C DBFFile instance at %s>" % (self.this,)
- if not dbflibc._have_commit: del commit
-
- def __del__(self,dbflibc=dbflibc):
- if getattr(self, 'thisown', 0):
- dbflibc.delete_DBFFile(self)
-
-class DBFFilePtr(DBFFile):
- def __init__(self,this):
- self.this = this
- self.thisown = 0
- self.__class__ = DBFFile
-
-
-
-
-
-#-------------- FUNCTION WRAPPERS ------------------
-
-def open(*args, **kwargs):
- val = apply(dbflibc.open,args,kwargs)
- if val: val = DBFFilePtr(val); val.thisown = 1
- return val
-
-def create(*args, **kwargs):
- val = apply(dbflibc.create,args,kwargs)
- if val: val = DBFFilePtr(val); val.thisown = 1
- return val
-
-
-
-#-------------- VARIABLE WRAPPERS ------------------
-
-FTString = dbflibc.FTString
-FTInteger = dbflibc.FTInteger
-FTDouble = dbflibc.FTDouble
-FTInvalid = dbflibc.FTInvalid
-_have_commit = dbflibc._have_commit
Deleted: trunk/thuban/libraries/pyshapelib/dbflib_wrap.c
===================================================================
--- trunk/thuban/libraries/pyshapelib/dbflib_wrap.c 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/dbflib_wrap.c 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,1431 +0,0 @@
-/* ----------------------------------------------------------------------------
- * This file was automatically generated by SWIG (http://www.swig.org).
- * Version 1.3u-20020503-1857 (Alpha 5)
- * And later MANUALLY edited at the end.
- *
- * This file is not intended to be easily readable and contains a number of
- * coding conventions designed to improve portability and efficiency. Do not make
- * changes to this file unless you know what you are doing--modify the SWIG
- * interface file instead.
- * ----------------------------------------------------------------------------- */
-
-#define SWIGPYTHON
-/***********************************************************************
- * common.swg
- *
- * This file contains generic SWIG runtime support for pointer
- * type checking as well as a few commonly used macros to control
- * external linkage.
- *
- * Author : David Beazley (beazley at cs.uchicago.edu)
- *
- * Copyright (c) 1999-2000, The University of Chicago
- *
- * This file may be freely redistributed without license or fee provided
- * this copyright message remains intact.
- ************************************************************************/
-
-#include <string.h>
-
-#if defined(_WIN32) || defined(__WIN32__)
-# if defined(_MSC_VER)
-# if defined(STATIC_LINKED)
-# define SWIGEXPORT(a) a
-# else
-# define SWIGEXPORT(a) __declspec(dllexport) a
-# endif
-# else
-# if defined(__BORLANDC__)
-# define SWIGEXPORT(a) a _export
-# else
-# define SWIGEXPORT(a) a
-# endif
-#endif
-#else
-# define SWIGEXPORT(a) a
-#endif
-
-#ifdef SWIG_GLOBAL
-#define SWIGRUNTIME(a) SWIGEXPORT(a)
-#else
-#define SWIGRUNTIME(a) static a
-#endif
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-typedef struct swig_type_info {
- char *name;
- void *(*converter)(void *);
- char *str;
- struct swig_type_info *next;
- struct swig_type_info *prev;
-} swig_type_info;
-
-#ifdef SWIG_NOINCLUDE
-SWIGEXPORT(swig_type_info *) SWIG_TypeRegister(swig_type_info *);
-SWIGEXPORT(swig_type_info *) SWIG_TypeCheck(char *c, swig_type_info *);
-SWIGEXPORT(void *) SWIG_TypeCast(swig_type_info *, void *);
-#else
-
-static swig_type_info *swig_type_list = 0;
-
-/* Register a type mapping with the type-checking */
-SWIGRUNTIME(swig_type_info *)
-SWIG_TypeRegister(swig_type_info *ti)
-{
- swig_type_info *tc, *head, *ret, *next;
- /* Check to see if this type has already been registered */
- tc = swig_type_list;
- while (tc) {
- if (strcmp(tc->name, ti->name) == 0) {
- /* Already exists in the table. Just add additional types to the list */
- head = tc;
- next = tc->next;
- goto l1;
- }
- tc = tc->prev;
- }
- head = ti;
- next = 0;
-
- /* Place in list */
- ti->prev = swig_type_list;
- swig_type_list = ti;
-
- /* Build linked lists */
- l1:
- ret = head;
- tc = ti + 1;
- /* Patch up the rest of the links */
- while (tc->name) {
- head->next = tc;
- tc->prev = head;
- head = tc;
- tc++;
- }
- head->next = next;
- return ret;
-}
-
-/* Check the typename */
-SWIGRUNTIME(swig_type_info *)
-SWIG_TypeCheck(char *c, swig_type_info *ty)
-{
- swig_type_info *s;
- if (!ty) return 0; /* Void pointer */
- s = ty->next; /* First element always just a name */
- while (s) {
- if (strcmp(s->name,c) == 0) {
- if (s == ty->next) return s;
- /* Move s to the top of the linked list */
- s->prev->next = s->next;
- if (s->next) {
- s->next->prev = s->prev;
- }
- /* Insert s as second element in the list */
- s->next = ty->next;
- if (ty->next) ty->next->prev = s;
- ty->next = s;
- return s;
- }
- s = s->next;
- }
- return 0;
-}
-
-/* Cast a pointer (needed for C++ inheritance */
-SWIGRUNTIME(void *)
-SWIG_TypeCast(swig_type_info *ty, void *ptr)
-{
- if ((!ty) || (!ty->converter)) return ptr;
- return (*ty->converter)(ptr);
-}
-
-/* Search for a swig_type_info structure */
-SWIGRUNTIME(void *)
-SWIG_TypeQuery(const char *name) {
- swig_type_info *ty = swig_type_list;
- while (ty) {
- if (ty->str && (strcmp(name,ty->str) == 0)) return ty;
- if (ty->name && (strcmp(name,ty->name) == 0)) return ty;
- ty = ty->prev;
- }
- return 0;
-}
-
-#endif
-
-#ifdef __cplusplus
-}
-#endif
-
-
-
-/***********************************************************************
- * python.swg
- *
- * This file contains the runtime support for Python modules
- * and includes code for managing global variables and pointer
- * type checking.
- *
- * Author : David Beazley (beazley at cs.uchicago.edu)
- ************************************************************************/
-
-#include <stdlib.h>
-#include "Python.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#define SWIG_PY_INT 1
-#define SWIG_PY_FLOAT 2
-#define SWIG_PY_STRING 3
-#define SWIG_PY_POINTER 4
-
-/* Constant information structure */
-typedef struct swig_const_info {
- int type;
- char *name;
- long lvalue;
- double dvalue;
- void *pvalue;
- swig_type_info **ptype;
-} swig_const_info;
-
-#ifdef SWIG_NOINCLUDE
-
-SWIGEXPORT(PyObject *) SWIG_newvarlink();
-SWIGEXPORT(void) SWIG_addvarlink(PyObject *, char *, PyObject *(*)(void), int (*)(PyObject *));
-SWIGEXPORT(int) SWIG_ConvertPtr(PyObject *, void **, swig_type_info *, int);
-SWIGEXPORT(void) SWIG_MakePtr(char *c, void *, swig_type_info *);
-SWIGEXPORT(PyObject *) SWIG_NewPointerObj(void *, swig_type_info *);
-SWIGEXPORT(void) SWIG_InstallConstants(PyObject *d, swig_const_info constants[]);
-
-#else
-
-/* -----------------------------------------------------------------------------
- * global variable support code.
- * ----------------------------------------------------------------------------- */
-
-typedef struct swig_globalvar {
- char *name; /* Name of global variable */
- PyObject *(*get_attr)(void); /* Return the current value */
- int (*set_attr)(PyObject *); /* Set the value */
- struct swig_globalvar *next;
-} swig_globalvar;
-
-typedef struct swig_varlinkobject {
- PyObject_HEAD
- swig_globalvar *vars;
-} swig_varlinkobject;
-
-static PyObject *
-swig_varlink_repr(swig_varlinkobject *v) {
- v = v;
- return PyString_FromString("<Global variables>");
-}
-
-static int
-swig_varlink_print(swig_varlinkobject *v, FILE *fp, int flags) {
- swig_globalvar *var;
- flags = flags;
- fprintf(fp,"Global variables { ");
- for (var = v->vars; var; var=var->next) {
- fprintf(fp,"%s", var->name);
- if (var->next) fprintf(fp,", ");
- }
- fprintf(fp," }\n");
- return 0;
-}
-
-static PyObject *
-swig_varlink_getattr(swig_varlinkobject *v, char *n) {
- swig_globalvar *var = v->vars;
- while (var) {
- if (strcmp(var->name,n) == 0) {
- return (*var->get_attr)();
- }
- var = var->next;
- }
- PyErr_SetString(PyExc_NameError,"Unknown C global variable");
- return NULL;
-}
-
-static int
-swig_varlink_setattr(swig_varlinkobject *v, char *n, PyObject *p) {
- swig_globalvar *var = v->vars;
- while (var) {
- if (strcmp(var->name,n) == 0) {
- return (*var->set_attr)(p);
- }
- var = var->next;
- }
- PyErr_SetString(PyExc_NameError,"Unknown C global variable");
- return 1;
-}
-
-statichere PyTypeObject varlinktype = {
- PyObject_HEAD_INIT(0)
- 0,
- "swigvarlink", /* Type name */
- sizeof(swig_varlinkobject), /* Basic size */
- 0, /* Itemsize */
- 0, /* Deallocator */
- (printfunc) swig_varlink_print, /* Print */
- (getattrfunc) swig_varlink_getattr, /* get attr */
- (setattrfunc) swig_varlink_setattr, /* Set attr */
- 0, /* tp_compare */
- (reprfunc) swig_varlink_repr, /* tp_repr */
- 0, /* tp_as_number */
- 0, /* tp_as_mapping*/
- 0, /* tp_hash */
-};
-
-/* Create a variable linking object for use later */
-SWIGRUNTIME(PyObject *)
-SWIG_newvarlink(void) {
- swig_varlinkobject *result = 0;
- result = PyMem_NEW(swig_varlinkobject,1);
- varlinktype.ob_type = &PyType_Type; /* Patch varlinktype into a PyType */
- result->ob_type = &varlinktype;
- result->vars = 0;
- result->ob_refcnt = 0;
- Py_XINCREF((PyObject *) result);
- return ((PyObject*) result);
-}
-
-SWIGRUNTIME(void)
-SWIG_addvarlink(PyObject *p, char *name,
- PyObject *(*get_attr)(void), int (*set_attr)(PyObject *p)) {
- swig_varlinkobject *v;
- swig_globalvar *gv;
- v= (swig_varlinkobject *) p;
- gv = (swig_globalvar *) malloc(sizeof(swig_globalvar));
- gv->name = (char *) malloc(strlen(name)+1);
- strcpy(gv->name,name);
- gv->get_attr = get_attr;
- gv->set_attr = set_attr;
- gv->next = v->vars;
- v->vars = gv;
-}
-/* Convert a pointer value */
-SWIGRUNTIME(int)
-SWIG_ConvertPtr(PyObject *obj, void **ptr, swig_type_info *ty, int flags) {
- unsigned long p;
- register int d;
- swig_type_info *tc;
- char *c;
- static PyObject *SWIG_this = 0;
- int newref = 0;
-
- if (!obj || (obj == Py_None)) {
- *ptr = 0;
- return 0;
- }
-#ifdef SWIG_COBJECT_TYPES
- if (!(PyCObject_Check(obj))) {
- if (!SWIG_this)
- SWIG_this = PyString_InternFromString("this");
- obj = PyObject_GetAttr(obj,SWIG_this);
- newref = 1;
- if (!obj) goto type_error;
- if (!PyCObject_Check(obj)) {
- Py_DECREF(obj);
- goto type_error;
- }
- }
- *ptr = PyCObject_AsVoidPtr(obj);
- c = (char *) PyCObject_GetDesc(obj);
- if (newref) Py_DECREF(obj);
- goto cobject;
-#else
- if (!(PyString_Check(obj))) {
- if (!SWIG_this)
- SWIG_this = PyString_InternFromString("this");
- obj = PyObject_GetAttr(obj,SWIG_this);
- newref = 1;
- if (!obj) goto type_error;
- if (!PyString_Check(obj)) {
- Py_DECREF(obj);
- goto type_error;
- }
- }
- c = PyString_AsString(obj);
- p = 0;
- /* Pointer values must start with a leading underscore */
- if (*c != '_') {
- *ptr = (void *) 0;
- if (strcmp(c,"NULL") == 0) {
- if (newref) Py_DECREF(obj);
- return 0;
- } else {
- if (newref) Py_DECREF(obj);
- goto type_error;
- }
- }
- c++;
- /* Extract hex value from pointer */
- while ((d = *c)) {
- if ((d >= '0') && (d <= '9'))
- p = (p << 4) + (d - '0');
- else if ((d >= 'a') && (d <= 'f'))
- p = (p << 4) + (d - ('a'-10));
- else
- break;
- c++;
- }
- *ptr = (void *) p;
- if (newref) Py_DECREF(obj);
-#endif
-
-#ifdef SWIG_COBJECT_TYPES
-cobject:
-#endif
-
- if (ty) {
- tc = SWIG_TypeCheck(c,ty);
- if (!tc) goto type_error;
- *ptr = SWIG_TypeCast(tc,(void*)p);
- }
- return 0;
-
-type_error:
-
- if (flags) {
- if (ty) {
- char *temp = (char *) malloc(64+strlen(ty->name));
- sprintf(temp,"Type error. Expected %s", ty->name);
- PyErr_SetString(PyExc_TypeError, temp);
- free((char *) temp);
- } else {
- PyErr_SetString(PyExc_TypeError,"Expected a pointer");
- }
- }
- return -1;
-}
-
-/* Take a pointer and convert it to a string */
-SWIGRUNTIME(void)
-SWIG_MakePtr(char *c, void *ptr, swig_type_info *ty) {
- static char hex[17] = "0123456789abcdef";
- unsigned long p, s;
- char result[32], *r;
- r = result;
- p = (unsigned long) ptr;
- if (p > 0) {
- while (p > 0) {
- s = p & 0xf;
- *(r++) = hex[s];
- p = p >> 4;
- }
- *r = '_';
- while (r >= result)
- *(c++) = *(r--);
- strcpy (c, ty->name);
- } else {
- strcpy (c, "NULL");
- }
-}
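When SWIG_COBJECT_TYPES is not defined, the two helpers above exchange pointers as strings of the form "_<hex address><type name>" (or the literal "NULL"). A rough Python sketch of that encoding, purely to illustrate the format SWIG_MakePtr produces and SWIG_ConvertPtr parses (not part of the wrapper):

    def make_swig_ptr_string(address, type_name):
        # mirrors SWIG_MakePtr: '_' + hex address + type name; NULL -> "NULL"
        if address == 0:
            return "NULL"
        return "_%x%s" % (address, type_name)

    def parse_swig_ptr_string(s):
        # mirrors the string branch of SWIG_ConvertPtr: skip the leading '_',
        # read hex digits until the first non-hex character, and keep the
        # rest as the type name
        if s == "NULL":
            return 0, None
        i = 1
        while i < len(s) and s[i] in "0123456789abcdef":
            i += 1
        return int(s[1:i], 16), s[i:]

    assert parse_swig_ptr_string(make_swig_ptr_string(0x8a3f42, "_p_DBFFile")) \
        == (0x8a3f42, "_p_DBFFile")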
-
-/* Create a new pointer object */
-SWIGRUNTIME(PyObject *)
-SWIG_NewPointerObj(void *ptr, swig_type_info *type) {
- char result[512];
- PyObject *robj;
- if (!ptr) {
- Py_INCREF(Py_None);
- return Py_None;
- }
-#ifdef SWIG_COBJECT_TYPES
- robj = PyCObject_FromVoidPtrAndDesc((void *) ptr, type->name, NULL);
-#else
- SWIG_MakePtr(result,ptr,type);
- robj = PyString_FromString(result);
-#endif
- return robj;
-}
-
-/* Install Constants */
-SWIGRUNTIME(void)
-SWIG_InstallConstants(PyObject *d, swig_const_info constants[]) {
- int i;
- PyObject *obj;
- for (i = 0; constants[i].type; i++) {
- switch(constants[i].type) {
- case SWIG_PY_INT:
- obj = PyInt_FromLong(constants[i].lvalue);
- break;
- case SWIG_PY_FLOAT:
- obj = PyFloat_FromDouble(constants[i].dvalue);
- break;
- case SWIG_PY_STRING:
- obj = PyString_FromString((char *) constants[i].pvalue);
- break;
- case SWIG_PY_POINTER:
- obj = SWIG_NewPointerObj(constants[i].pvalue, *(constants[i]).ptype);
- break;
- default:
- obj = 0;
- break;
- }
- if (obj) {
- PyDict_SetItemString(d,constants[i].name,obj);
- Py_DECREF(obj);
- }
- }
-}
-
-#endif
-
-#ifdef __cplusplus
-}
-#endif
-
-
-
-/* -------- TYPES TABLE (BEGIN) -------- */
-
-#define SWIGTYPE_p_DBFFile swig_types[0]
-static swig_type_info *swig_types[2];
-
-/* -------- TYPES TABLE (END) -------- */
-
-
-/*-----------------------------------------------
- @(target):= dbflibc.so
- ------------------------------------------------*/
-#define SWIG_init initdbflibc
-
-#define SWIG_name "dbflibc"
-
-#include "shapefil.h"
-
-
-/* Read one attribute from the dbf handle and return it as a new python object
- *
- * If an error occurs, set the appropriate Python exception and return
- * NULL.
- *
- * Assume that the values of the record and field arguments are valid.
- * The name argument will be passed to DBFGetFieldInfo as is and should
- * thus be either NULL or a pointer to an array of at least 12 chars
- */
-static PyObject *
-do_read_attribute(DBFInfo * handle, int record, int field, char * name)
-{
- int type, width;
- PyObject *value;
-
- type = DBFGetFieldInfo(handle, field, name, &width, NULL);
- /* For strings, NULL and the empty string are indistinguishable
- * in DBF files. We prefer empty strings for backwards
- * compatibility reasons, because older wrapper versions returned
- * NULL string attributes as empty strings.
- */
- if (type != FTString && DBFIsAttributeNULL(handle, record, field))
- {
- value = Py_None;
- Py_INCREF(value);
- }
- else
- {
- switch (type)
- {
- case FTString:
- {
- const char * temp = DBFReadStringAttribute(handle, record, field);
- if (temp)
- {
- value = PyString_FromString(temp);
- }
- else
- {
- PyErr_Format(PyExc_IOError,
- "Can't read value for row %d column %d",
- record, field);
- value = NULL;
- }
- break;
- }
- case FTInteger:
- value = PyInt_FromLong(DBFReadIntegerAttribute(handle, record,
- field));
- break;
- case FTDouble:
- value = PyFloat_FromDouble(DBFReadDoubleAttribute(handle, record,
- field));
- break;
- default:
- PyErr_Format(PyExc_TypeError, "Invalid field data type %d",
- type);
- value = NULL;
- }
- }
- if (!value)
- return NULL;
-
- return value;
-}
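At the Python level the mapping implemented by do_read_attribute comes down to: FTString columns are returned as strings (a NULL string attribute comes back as ""), FTInteger columns as ints and FTDouble columns as floats, with NULL returned as None for the non-string types. A minimal usage sketch, assuming a dbflib module built from this wrapper, a "testfile" DBF like the one created by pytest.py, and that DBFFile objects expose read_attribute as a method:

    import dbflib

    dbf = dbflib.open("testfile")
    value = dbf.read_attribute(0, 0)   # row 0, column 0
    # FTString  -> str   (NULL stored as "" for backwards compatibility)
    # FTInteger -> int   (NULL -> None)
    # FTDouble  -> float (NULL -> None)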
-
-/* the read_attribute method. Return the value of the given record and
- * field as a python object of the appropriate type.
- *
- * In case of error, set a python exception and return NULL. Since that
- * value will be returned to the python interpreter as is, the
- * interpreter should recognize the exception.
- */
-
-static PyObject *
-DBFInfo_read_attribute(DBFInfo * handle, int record, int field)
-{
- if (record < 0 || record >= DBFGetRecordCount(handle))
- {
- PyErr_Format(PyExc_ValueError,
- "record index %d out of bounds (record count: %d)",
- record, DBFGetRecordCount(handle));
- return NULL;
- }
-
- if (field < 0 || field >= DBFGetFieldCount(handle))
- {
- PyErr_Format(PyExc_ValueError,
- "field index %d out of bounds (field count: %d)",
- field, DBFGetFieldCount(handle));
- return NULL;
- }
-
- return do_read_attribute(handle, record, field, NULL);
-}
-
-
-/* the read_record method. Return the record with the given index as a
- * dictionary whose keys are the names of the fields, and whose values
- * have the appropriate Python type.
- *
- * In case of error, set a python exception and return NULL. Since that
- * value will be returned to the python interpreter as is, the
- * interpreter should recognize the exception.
- */
-
-static PyObject *
-DBFInfo_read_record(DBFInfo * handle, int record)
-{
- int num_fields;
- int i;
- int type, width;
- char name[12];
- PyObject *dict;
- PyObject *value;
-
- if (record < 0 || record >= DBFGetRecordCount(handle))
- {
- PyErr_Format(PyExc_ValueError,
- "record index %d out of bounds (record count: %d)",
- record, DBFGetRecordCount(handle));
- return NULL;
- }
-
- dict = PyDict_New();
- if (!dict)
- return NULL;
-
- num_fields = DBFGetFieldCount(handle);
- for (i = 0; i < num_fields; i++)
- {
- value = do_read_attribute(handle, record, i, name);
- if (!value)
- goto fail;
-
- PyDict_SetItemString(dict, name, value);
- Py_DECREF(value);
- }
-
- return dict;
-
- fail:
- Py_XDECREF(dict);
- return NULL;
-}
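A short usage sketch of read_record (the file name and field names are borrowed from pytest.py further down, so treat them as assumptions about the data rather than part of the wrapper):

    import dbflib

    dbf = dbflib.open("testfile")
    first = dbf.read_record(0)      # a dict keyed by field name,
                                    # e.g. with keys 'NAME', 'INT', 'FLOAT'
    records = [dbf.read_record(i) for i in range(dbf.record_count())]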
-
-/* the write_record method. Write the record with the given index, passed
- * either as a dictionary or a sequence (i.e. a list or a tuple).
- *
- * If it's a dictionary the keys must be the names of the fields and
- * their value must have a suitable type. Only the fields actually
- * contained in the dictionary are written. Fields for which there's no
- * item in the dict are not modified.
- *
- * If it's a sequence, all fields must be present in the right order.
- *
- * In case of error, set a python exception and return NULL. Since that
- * value will be returned to the python interpreter as is, the
- * interpreter should recognize the exception.
- *
- * The method is implemented with two c-functions, write_field to write
- * a single field and DBFInfo_write_record as the front-end.
- */
-
-
-/* write a single field of a record. */
-static int
-write_field(DBFHandle handle, int record, int field, int type,
- PyObject * value)
-{
- char * string_value;
- int int_value;
- double double_value;
-
- if (value == Py_None)
- {
- if (!DBFWriteNULLAttribute(handle, record, field))
- {
- PyErr_Format(PyExc_IOError,
- "can't write NULL field %d of record %d",
- field, record);
- return 0;
- }
- }
- else
- {
- switch (type)
- {
- case FTString:
- string_value = PyString_AsString(value);
- if (!string_value)
- return 0;
- if (!DBFWriteStringAttribute(handle, record, field, string_value))
- {
- PyErr_Format(PyExc_IOError,
- "can't write field %d of record %d",
- field, record);
- return 0;
- }
- break;
-
- case FTInteger:
- int_value = PyInt_AsLong(value);
- if (int_value == -1 && PyErr_Occurred())
- return 0;
- if (!DBFWriteIntegerAttribute(handle, record, field, int_value))
- {
- PyErr_Format(PyExc_IOError,
- "can't write field %d of record %d",
- field, record);
- return 0;
- }
- break;
-
- case FTDouble:
- double_value = PyFloat_AsDouble(value);
- if (double_value == -1 && PyErr_Occurred())
- return 0;
- if (!DBFWriteDoubleAttribute(handle, record, field, double_value))
- {
- PyErr_Format(PyExc_IOError,
- "can't write field %d of record %d",
- field, record);
- return 0;
- }
- break;
-
- default:
- PyErr_Format(PyExc_TypeError, "Invalid field data type %d", type);
- return 0;
- }
- }
-
- return 1;
-}
-
-static
-PyObject *
-DBFInfo_write_record(DBFHandle handle, int record, PyObject *record_object)
-{
- int num_fields;
- int i, length;
- int type, width;
- char name[12];
- PyObject * value = NULL;
-
- num_fields = DBFGetFieldCount(handle);
-
- /* We used to use PyMapping_Check to test whether record_object is a
- * dictionary like object instead of PySequence_Check to test
- * whether it's a sequence. Unfortunately in Python 2.3
- * PyMapping_Check returns true for lists and tuples too so the old
- * approach doesn't work anymore.
- */
- if (PySequence_Check(record_object))
- {
- /* It's a sequence object. Iterate through all items in the
- * sequence and write them to the appropriate field.
- */
- length = PySequence_Length(record_object);
- if (length != num_fields)
- {
- PyErr_SetString(PyExc_TypeError,
- "record must have one item for each field");
- goto fail;
- }
- for (i = 0; i < length; i++)
- {
- type = DBFGetFieldInfo(handle, i, name, &width, NULL);
- value = PySequence_GetItem(record_object, i);
- if (value)
- {
- if (!write_field(handle, record, i, type, value))
- goto fail;
- Py_DECREF(value);
- }
- else
- {
- goto fail;
- }
- }
- }
- else
- {
- /* It's a dictionary-like object. Iterate over the names of the
- * known fields and write the corresponding item
- */
- for (i = 0; i < num_fields; i++)
- {
- type = DBFGetFieldInfo(handle, i, name, &width, NULL);
-
- /* if the dictionary has a key equal to the field name, write that
- * object to the appropriate field, otherwise just clear the python
- * exception and do nothing.
- */
- value = PyMapping_GetItemString(record_object, name);
- if (value)
- {
- if (!write_field(handle, record, i, type, value))
- goto fail;
- Py_DECREF(value);
- }
- else
- {
- PyErr_Clear();
- }
- }
- }
-
- Py_INCREF(Py_None);
- return Py_None;
-
- fail:
- Py_XDECREF(value);
- return NULL;
-}
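The two accepted record forms described in the comment above, as a usage sketch (field layout again borrowed from pytest.py; the file must have been opened writable):

    import dbflib

    dbf = dbflib.open("testfile", "r+b")
    # dictionary form: only the fields named in the dict are written
    dbf.write_record(0, {"NAME": "Weatherwax", "INT": 1, "FLOAT": 3.1415926535})
    # sequence form: one item per field, in field order
    dbf.write_record(1, ("Ogg", 2, -1000.1234))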
-
-static
-void
-DBFInfo_commit(DBFHandle handle)
-{
-#if HAVE_UPDATE_HEADER
- DBFUpdateHeader(handle);
-#endif
-}
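Since DBFUpdateHeader only exists in newer shapelib versions, the wrapper exports its availability as the constant _have_commit (see the constant table below). Assuming the module re-exports that constant and exposes commit as a DBFFile method, a cautious caller can guard the call:

    import dbflib

    dbf = dbflib.open("testfile", "r+b")
    if dbflib._have_commit:   # 1 only if HAVE_UPDATE_HEADER was set at build time
        dbf.commit()          # commit() assumed to wrap DBFInfo_commit above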
-
-static PyObject* l_output_helper(PyObject* target, PyObject* o) {
- PyObject* o2;
- if (!target) {
- target = o;
- } else if (target == Py_None) {
- Py_DECREF(Py_None);
- target = o;
- } else {
- if (!PyList_Check(target)) {
- o2 = target;
- target = PyList_New(0);
- PyList_Append(target, o2);
- Py_XDECREF(o2);
- }
- PyList_Append(target,o);
- Py_XDECREF(o);
- }
- return target;
-}
-
-static PyObject* t_output_helper(PyObject* target, PyObject* o) {
- PyObject* o2;
- PyObject* o3;
-
- if (!target) {
- target = o;
- } else if (target == Py_None) {
- Py_DECREF(Py_None);
- target = o;
- } else {
- if (!PyTuple_Check(target)) {
- o2 = target;
- target = PyTuple_New(1);
- PyTuple_SetItem(target, 0, o2);
- }
- o3 = PyTuple_New(1);
- PyTuple_SetItem(o3, 0, o);
-
- o2 = target;
- target = PySequence_Concat(o2, o3);
- Py_DECREF(o2);
- Py_DECREF(o3);
- }
- return target;
-}
-
-#define SWIG_MemoryError 1
-#define SWIG_IOError 2
-#define SWIG_RuntimeError 3
-#define SWIG_IndexError 4
-#define SWIG_TypeError 5
-#define SWIG_DivisionByZero 6
-#define SWIG_OverflowError 7
-#define SWIG_SyntaxError 8
-#define SWIG_ValueError 9
-#define SWIG_SystemError 10
-#define SWIG_UnknownError 99
-
-static void _SWIG_exception(int code, char *msg) {
- switch(code) {
- case SWIG_MemoryError:
- PyErr_SetString(PyExc_MemoryError,msg);
- break;
- case SWIG_IOError:
- PyErr_SetString(PyExc_IOError,msg);
- break;
- case SWIG_RuntimeError:
- PyErr_SetString(PyExc_RuntimeError,msg);
- break;
- case SWIG_IndexError:
- PyErr_SetString(PyExc_IndexError,msg);
- break;
- case SWIG_TypeError:
- PyErr_SetString(PyExc_TypeError,msg);
- break;
- case SWIG_DivisionByZero:
- PyErr_SetString(PyExc_ZeroDivisionError,msg);
- break;
- case SWIG_OverflowError:
- PyErr_SetString(PyExc_OverflowError,msg);
- break;
- case SWIG_SyntaxError:
- PyErr_SetString(PyExc_SyntaxError,msg);
- break;
- case SWIG_ValueError:
- PyErr_SetString(PyExc_ValueError,msg);
- break;
- case SWIG_SystemError:
- PyErr_SetString(PyExc_SystemError,msg);
- break;
- default:
- PyErr_SetString(PyExc_RuntimeError,msg);
- break;
- }
-}
-
-#define SWIG_exception(a,b) { _SWIG_exception(a,b); return NULL; }
-
- typedef struct {
- DBFHandle handle;
- } DBFFile;
-
-#define NOCHECK_delete_DBFFile
-#define NOCHECK_DBFFile_close
-
- DBFFile * open_DBFFile(const char * file, const char * mode)
- {
- DBFFile * self = malloc(sizeof(DBFFile));
- if (self)
- self->handle = DBFOpen(file, mode);
- return self;
- }
-
- DBFFile * create_DBFFile(const char * file)
- {
- DBFFile * self = malloc(sizeof(DBFFile));
- if (self)
- self->handle = DBFCreate(file);
- return self;
- }
-#ifdef __cplusplus
-extern "C" {
-#endif
-static PyObject *_wrap_open(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- char *arg0 ;
- char *arg1 = "rb" ;
- DBFFile *result ;
-
- if(!PyArg_ParseTuple(args,"s|s:open",&arg0,&arg1)) return NULL;
- {
- result = (DBFFile *)open_DBFFile((char const *)arg0,(char const *)arg1);
- ;
- if (!result)
- {
- SWIG_exception(SWIG_MemoryError, "no memory");
- }
- else if (!result->handle)
- {
- SWIG_exception(SWIG_IOError, "open_DBFFile failed");
- }
- }resultobj = SWIG_NewPointerObj((void *) result, SWIGTYPE_p_DBFFile);
- return resultobj;
-}
-
-
-static PyObject *_wrap_create(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- char *arg0 ;
- DBFFile *result ;
-
- if(!PyArg_ParseTuple(args,"s:create",&arg0)) return NULL;
- {
- result = (DBFFile *)create_DBFFile((char const *)arg0);
- ;
- if (!result)
- {
- SWIG_exception(SWIG_MemoryError, "no memory");
- }
- else if (!result->handle)
- {
- SWIG_exception(SWIG_IOError, "create_DBFFile failed");
- }
- }resultobj = SWIG_NewPointerObj((void *) result, SWIGTYPE_p_DBFFile);
- return resultobj;
-}
-
-
-DBFFile * new_DBFFile(char const *file,char const *mode) {
- {
- DBFFile * self = malloc(sizeof(DBFFile));
- if (self)
- self->handle = DBFOpen(file, mode);
- return self;
- }
-}
-
-
-static PyObject *_wrap_new_DBFFile(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- char *arg0 ;
- char *arg1 = "rb" ;
- DBFFile *result ;
-
- if(!PyArg_ParseTuple(args,"s|s:new_DBFFile",&arg0,&arg1)) return NULL;
- {
- result = (DBFFile *)new_DBFFile((char const *)arg0,(char const *)arg1);
- ;
- if (!result)
- {
- SWIG_exception(SWIG_MemoryError, "no memory");
- }
- else if (!result->handle)
- {
- SWIG_exception(SWIG_IOError, "new_DBFFile failed");
- }
- }resultobj = SWIG_NewPointerObj((void *) result, SWIGTYPE_p_DBFFile);
- return resultobj;
-}
-
-
-void delete_DBFFile(DBFFile *self) {
- {
- if (self->handle)
- DBFClose(self->handle);
- free(self);
- }
-}
-
-
-static PyObject *_wrap_delete_DBFFile(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- PyObject * argo0 =0 ;
-
- if(!PyArg_ParseTuple(args,"O:delete_DBFFile",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_delete_DBFFile
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- delete_DBFFile(arg0);
- Py_INCREF(Py_None);
- resultobj = Py_None;
- return resultobj;
-}
-
-
-void DBFFile_close(DBFFile *self) {
- {
- if (self->handle)
- DBFClose(self->handle);
- self->handle = NULL;
- }
-}
-
-
-static PyObject *_wrap_DBFFile_close(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- PyObject * argo0 =0 ;
-
- if(!PyArg_ParseTuple(args,"O:DBFFile_close",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_DBFFile_close
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- DBFFile_close(arg0);
- Py_INCREF(Py_None);
- resultobj = Py_None;
- return resultobj;
-}
-
-
-int DBFFile_field_count(DBFFile *self) {
- {
- return DBFGetFieldCount(self->handle);
- }
-}
-
-
-static PyObject *_wrap_DBFFile_field_count(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- PyObject * argo0 =0 ;
- int result ;
-
- if(!PyArg_ParseTuple(args,"O:DBFFile_field_count",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_DBFFile_field_count
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- result = (int )DBFFile_field_count(arg0);
- resultobj = PyInt_FromLong((long)result);
- return resultobj;
-}
-
-
-int DBFFile_record_count(DBFFile *self) {
- {
- return DBFGetRecordCount(self->handle);
- }
-}
-
-
-static PyObject *_wrap_DBFFile_record_count(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- PyObject * argo0 =0 ;
- int result ;
-
- if(!PyArg_ParseTuple(args,"O:DBFFile_record_count",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_DBFFile_record_count
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- result = (int )DBFFile_record_count(arg0);
- resultobj = PyInt_FromLong((long)result);
- return resultobj;
-}
-
-
-int DBFFile_field_info(DBFFile *self,int iField,char *fieldname_out,int *output_width,int *output_decimals) {
- {
- return DBFGetFieldInfo(self->handle, iField, fieldname_out,
- output_width, output_decimals);
- }
-}
-
-
-static PyObject *_wrap_DBFFile_field_info(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- int arg1 ;
- char *arg2 ;
- int *arg3 ;
- int *arg4 ;
- char temp[12] ;
- int temp0 ;
- int temp1 ;
- PyObject * argo0 =0 ;
- int result ;
-
- {
- arg2 = temp;
- }
- {
- arg3 = &temp0;
- }
- {
- arg4 = &temp1;
- }
- if(!PyArg_ParseTuple(args,"Oi:DBFFile_field_info",&argo0,&arg1)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_DBFFile_field_info
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- result = (int )DBFFile_field_info(arg0,arg1,arg2,arg3,arg4);
- resultobj = PyInt_FromLong((long)result);
- {
- PyObject * string = PyString_FromString(arg2);
- resultobj = t_output_helper(resultobj,string);
- }
- {
- PyObject *o;
- o = PyInt_FromLong((long) (*arg3));
- resultobj = t_output_helper(resultobj, o);
- }
- {
- PyObject *o;
- o = PyInt_FromLong((long) (*arg4));
- resultobj = t_output_helper(resultobj, o);
- }
- return resultobj;
-}
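Because the three output parameters are appended to the return value by t_output_helper, field_info yields a single 4-tuple at the Python level (the same pattern list_dbf in pytest.py relies on):

    import dbflib

    dbf = dbflib.open("testfile")
    ftype, name, width, decimals = dbf.field_info(0)
    # ftype is one of dbflib.FTString, dbflib.FTInteger, dbflib.FTDouble, ...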
-
-
-PyObject * DBFFile_read_record(DBFFile *self,int record) {
- {
- return DBFInfo_read_record(self->handle, record);
- }
-}
-
-
-static PyObject *_wrap_DBFFile_read_record(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- int arg1 ;
- PyObject * argo0 =0 ;
- PyObject *result ;
-
- if(!PyArg_ParseTuple(args,"Oi:DBFFile_read_record",&argo0,&arg1)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_DBFFile_read_record
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- result = (PyObject *)DBFFile_read_record(arg0,arg1);
- {
- resultobj = result;
- }
- return resultobj;
-}
-
-
-PyObject * DBFFile_read_attribute(DBFFile *self,int record,int field) {
- {
- return DBFInfo_read_attribute(self->handle, record, field);
- }
-}
-
-
-static PyObject *_wrap_DBFFile_read_attribute(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- int arg1 ;
- int arg2 ;
- PyObject * argo0 =0 ;
- PyObject *result ;
-
- if(!PyArg_ParseTuple(args,"Oii:DBFFile_read_attribute",&argo0,&arg1,&arg2)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_DBFFile_read_attribute
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- result = (PyObject *)DBFFile_read_attribute(arg0,arg1,arg2);
- {
- resultobj = result;
- }
- return resultobj;
-}
-
-
-int DBFFile_add_field(DBFFile *self,char const *pszFieldName,DBFFieldType eType,int nWidth,int nDecimals) {
- {
- return DBFAddField(self->handle, pszFieldName, eType, nWidth,
- nDecimals);
- }
-}
-
-
-static PyObject *_wrap_DBFFile_add_field(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- char *arg1 ;
- int arg2 ;
- int arg3 ;
- int arg4 ;
- PyObject * argo0 =0 ;
- int result ;
-
- if(!PyArg_ParseTuple(args,"Osiii:DBFFile_add_field",&argo0,&arg1,&arg2,&arg3,&arg4)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_DBFFile_add_field
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- {
- result = (int )DBFFile_add_field(arg0,(char const *)arg1,(DBFFieldType )arg2,arg3,arg4);
- ;
- if (result < 0)
- {
- SWIG_exception(SWIG_RuntimeError, "add_field failed");
- }
- }resultobj = PyInt_FromLong((long)result);
- return resultobj;
-}
-
-
-PyObject * DBFFile_write_record(DBFFile *self,int record,PyObject *dict_or_sequence) {
- {
- return DBFInfo_write_record(self->handle, record,
- dict_or_sequence);
- }
-}
-
-
-static PyObject *_wrap_DBFFile_write_record(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- int arg1 ;
- PyObject *arg2 ;
- PyObject * argo0 =0 ;
- PyObject * obj2 = 0 ;
- PyObject *result ;
-
- if(!PyArg_ParseTuple(args,"OiO:DBFFile_write_record",&argo0,&arg1,&obj2)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- arg2 = obj2;
- }
- {
- #ifndef NOCHECK_DBFFile_write_record
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- result = (PyObject *)DBFFile_write_record(arg0,arg1,arg2);
- {
- resultobj = result;
- }
- return resultobj;
-}
-
-
-void DBFFile_commit(DBFFile *self) {
- {
- DBFInfo_commit(self->handle);
- }
-}
-
-
-static PyObject *_wrap_DBFFile_commit(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- DBFFile *arg0 ;
- PyObject * argo0 =0 ;
-
- if(!PyArg_ParseTuple(args,"O:DBFFile_commit",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_DBFFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_DBFFile_commit
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "dbffile already closed");
- #endif
- }
- DBFFile_commit(arg0);
- Py_INCREF(Py_None);
- resultobj = Py_None;
- return resultobj;
-}
-
-
-static PyMethodDef dbflibcMethods[] = {
- { "open", _wrap_open, METH_VARARGS },
- { "create", _wrap_create, METH_VARARGS },
- { "new_DBFFile", _wrap_new_DBFFile, METH_VARARGS },
- { "delete_DBFFile", _wrap_delete_DBFFile, METH_VARARGS },
- { "DBFFile_close", _wrap_DBFFile_close, METH_VARARGS },
- { "DBFFile_field_count", _wrap_DBFFile_field_count, METH_VARARGS },
- { "DBFFile_record_count", _wrap_DBFFile_record_count, METH_VARARGS },
- { "DBFFile_field_info", _wrap_DBFFile_field_info, METH_VARARGS },
- { "DBFFile_read_record", _wrap_DBFFile_read_record, METH_VARARGS },
- { "DBFFile_read_attribute", _wrap_DBFFile_read_attribute, METH_VARARGS },
- { "DBFFile_add_field", _wrap_DBFFile_add_field, METH_VARARGS },
- { "DBFFile_write_record", _wrap_DBFFile_write_record, METH_VARARGS },
- { "DBFFile_commit", _wrap_DBFFile_commit, METH_VARARGS },
- { NULL, NULL }
-};
-
-#ifdef __cplusplus
-}
-#endif
-
-/* -------- TYPE CONVERSION AND EQUIVALENCE RULES (BEGIN) -------- */
-
-static swig_type_info _swigt__p_DBFFile[] = {{"_p_DBFFile", 0, "DBFFile *"},{"_p_DBFFile"},{0}};
-
-static swig_type_info *swig_types_initial[] = {
-_swigt__p_DBFFile,
-0
-};
-
-
-/* -------- TYPE CONVERSION AND EQUIVALENCE RULES (END) -------- */
-
-static swig_const_info swig_const_table[] = {
- { SWIG_PY_INT, "FTString", (long) FTString, 0, 0, 0},
- { SWIG_PY_INT, "FTInteger", (long) FTInteger, 0, 0, 0},
- { SWIG_PY_INT, "FTDouble", (long) FTDouble, 0, 0, 0},
- { SWIG_PY_INT, "FTInvalid", (long) FTInvalid, 0, 0, 0},
- { SWIG_PY_INT, "_have_commit", (long) HAVE_UPDATE_HEADER, 0, 0, 0},
-{0}};
-
-static PyObject *SWIG_globals;
-#ifdef __cplusplus
-extern "C"
-#endif
-SWIGEXPORT(void) initdbflibc(void) {
- PyObject *m, *d;
- int i;
- SWIG_globals = SWIG_newvarlink();
- m = Py_InitModule("dbflibc", dbflibcMethods);
- d = PyModule_GetDict(m);
- for (i = 0; swig_types_initial[i]; i++) {
- swig_types[i] = SWIG_TypeRegister(swig_types_initial[i]);
- }
- SWIG_InstallConstants(d,swig_const_table);
-
-# if PY_VERSION_HEX >=0x02040000
- /* because we are in a python module now, we can give out
- * pointers to python's locale-agnostic function
- * XXX this clearly is a hack
- */
- DBFSetatof_function(&PyOS_ascii_atof);
-# endif
-
-}
-
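The PyOS_ascii_atof hook above exists because DBF files always use '.' as the decimal separator while C's atof() honours the process locale. A rough illustration of the failure mode it avoids (the de_DE locale is only an example and may not be installed; this snippet is not part of the wrapper):

    import locale
    try:
        locale.setlocale(locale.LC_NUMERIC, "de_DE")   # a decimal-comma locale
    except locale.Error:
        pass   # locale not installed; the point still stands
    # In such a locale a plain C atof("3.14") stops at the '.', yielding 3.0,
    # whereas PyOS_ascii_atof always treats '.' as the decimal separator and
    # yields 3.14 regardless of the locale settings.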
Copied: trunk/thuban/libraries/pyshapelib/dbflibmodule.c (from rev 2888, branches/WIP-pyshapelib-Unicode/thuban/libraries/pyshapelib/dbflibmodule.c)
Modified: trunk/thuban/libraries/pyshapelib/pyshapelib_api.h
===================================================================
--- trunk/thuban/libraries/pyshapelib/pyshapelib_api.h 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/pyshapelib_api.h 2009-09-27 20:36:21 UTC (rev 2889)
@@ -22,7 +22,7 @@
* assign it to the variable given as argument */
#define PYSHAPELIB_IMPORT_API(apivariable) \
{ \
- PyObject * shapelib = PyImport_ImportModule("shapelibc"); \
+ PyObject * shapelib = PyImport_ImportModule("shapelib"); \
if (shapelib) \
{ \
PyObject * c_api_func = PyObject_GetAttrString(shapelib, "c_api"); \
Copied: trunk/thuban/libraries/pyshapelib/pyshapelib_common.h (from rev 2888, branches/WIP-pyshapelib-Unicode/thuban/libraries/pyshapelib/pyshapelib_common.h)
Modified: trunk/thuban/libraries/pyshapelib/pytest.py
===================================================================
--- trunk/thuban/libraries/pyshapelib/pytest.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/pytest.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,20 +1,57 @@
import shapelib, dbflib, shptree
+filename = "testfile"
+# filename = u"x\u03C0\u03C1\u03C2" # test a unicode filename
+
#
# Test the shapefile module
#
+def test_shpobject(obj):
+ # The vertices method returns the shape as a list of lists of tuples.
+ print "vertices:", obj.vertices()
+
+ # The part_types method returns a tuple with the types of every part
+ print "part_types:", obj.part_types()
+
+ # The extents method returns a tuple with two 4-element lists with the min.
+ # and max. values of the vertices.
+ print "extents:", obj.extents()
+
+ # The type attribute is the type code (one of the SHPT* constants
+ # defined in the shapelib module)
+ print "type:", obj.type
+
+ # The id attribute is the shape id
+ print "id:", obj.id
+
+ # the __repr__ method returns a string that can be eval()'ed to
+ # recreate the object. This __repr__ is also used by __str__
+ # and print
+ print "obj:", obj
+ print "reconstruction using __repr__:",
+ obj_repr = repr(obj)
+ obj_copy = eval(obj_repr)
+ if repr(obj_copy) == obj_repr:
+ print "ok"
+ else:
+ print "failed"
+
+
+
def make_shapefile(filename):
+ print "\n* Creating a ShapeFile"
+
# Create a shapefile with polygons
outfile = shapelib.create(filename, shapelib.SHPT_POLYGON)
# Create one very simple polygon and write it to the shapefile. The
# vertices should be given in clockwise order to comply with the
# shapefile specification.
+ print "\nA very simple polygon"
obj = shapelib.SHPObject(shapelib.SHPT_POLYGON, 1,
[[(10, 10), (10, 20), (20, 20), (10, 10)]])
- print obj.extents()
- print obj.vertices()
+ test_shpobject(obj)
outfile.write_object(-1, obj)
# Create a polygon with a hole. Note that according to the
@@ -26,18 +63,20 @@
# list of part types, one for each part of the shape. For polygons,
# the part type is always shapelib.SHPP_RING, though. The part
# types are only relevant for SHPT_MULTIPATCH shapefiles.
+ print "\nPolygon with a hole"
obj = shapelib.SHPObject(shapelib.SHPT_POLYGON, 1,
[[(0, 0), (0, 40), (40, 40), (40, 0), (0, 0)],
[(10, 10), (20, 10), (20, 20), (10, 20),(10, 10)],
])
- print obj.extents()
- print obj.vertices()
+ test_shpobject(obj)
outfile.write_object(-1, obj)
# close the file.
outfile.close()
def read_shapefile(filename):
+ print "\n* Reading a ShapeFile"
+
# open the shapefile
shp = shapelib.ShapeFile(filename)
@@ -46,30 +85,21 @@
# the SHPT* constants defined in the shapelib module) and min and
# max are 4-element lists with the min. and max. values of the
# vertices.
- print shp.info()
+ print "info:", shp.info()
- # read_object reads a shape
- obj = shp.read_object(0)
-
- # The vertices method returns the shape as a list of lists of tuples.
- print obj.vertices()[0][:10]
-
- # The extents returns a tuple with two 4-element lists with the min.
- # and max. values of the vertices.
- print obj.extents()
-
- # The type attribute is the type code (one of the SHPT* constants
- # defined in the shapelib module)
- print obj.type
-
- # The id attribute is the shape id
- print obj.id
-
# the cobject method returns a PyCObject containing the shapelib
# SHPHandle. This is useful for passing shapefile objects to
# C-Python extensions.
- print shp.cobject()
+ print "cobject:", shp.cobject()
+
+ n = shp.info()[0]
+ for i in range(n):
+ obj = shp.read_object(i)
+ print "\nread_object(%i):" % i
+ test_shpobject(obj)
+ print "\n* SHPTree:"
+
# build a quad tree from the shapefile. The first argument must be
# the return value of the shape file object's cobject method (this
# is currently needed to access the shape file at the C-level). The
@@ -83,27 +113,87 @@
print tree.find_shapes(minima[:2], maxima[:2])
-make_shapefile("testfile")
-read_shapefile("testfile")
+print "--- testing shapelib ---"
+make_shapefile(filename)
+read_shapefile(filename)
+
#
+# Test MultiPatch shapefiles
+#
+
+def make_multipatch(filename):
+ print "\n* Creating multipatch ShapeFile"
+
+ # Create a shapefile with multipatches
+ outfile = shapelib.create(filename, shapelib.SHPT_MULTIPATCH)
+
+ # Create a quad as a triangle strip and as a triangle fan, in ONE object!
+ # Multipatch shapefiles use XYZM vertices, but you can get away with
+ # only specifying X and Y; Z and M are zero by default.
+ print "\nA triangle strip"
+ obj = shapelib.SHPObject(shapelib.SHPT_MULTIPATCH, 0,
+ [[(0, 0), (0, 10), (10, 0), (10, 10)],
+ [(20, 20), (20, 30), (30, 30), (30, 20)]],
+ [shapelib.SHPP_TRISTRIP, shapelib.SHPP_TRIFAN])
+ test_shpobject(obj)
+ outfile.write_object(-1, obj)
+
+ # A polygon as an Outer ring and inner ring, with XYZ coordinates
+ # and measure values M. Here we will use the part types to specify
+ # their particular type.
+ #
+ # You can have more than one polygon in a single Object, as long
+ # as you obey the following sequence: each polygon starts with an
+ # outer ring, followed by its holes as inner rings.
+ #
+ # None is also accepted as an M value to specify no-data. The ESRI
+ # Shapefile specs define any M value smaller than 1e-38 as no-data.
+ # shapelib will store no-data as a zero.
+ #
+ # If you don't need the M value, you can leave it out and use triples
+ # as vertices instead. For the first half of the inner ring,
+ # we used None to specify no-data. In the second half, we just
+ # omitted it.
+ #
+ print "\nA polygon as outer ring and inner ring with XYZM coordinates"
+ obj = shapelib.SHPObject(shapelib.SHPT_MULTIPATCH, 1,
+ [[(0, 0, 0, 35.3), (0, 40, 10, 15.4), (40, 40, 20, 9.5), (40, 0, 10, 24.6), (0, 0, 0, 31.8)],
+ [(10, 10, 5, None), (20, 10, 10, None), (20, 20, 15), (10, 20, 10, 20),(10, 10, 5)]],
+ [shapelib.SHPP_OUTERRING, shapelib.SHPP_INNERRING])
+ test_shpobject(obj)
+ outfile.write_object(-1, obj)
+
+ # close the file.
+ outfile.close()
+
+
+print "--- testing multipatch ---"
+
+make_multipatch("multipatch")
+read_shapefile("multipatch")
+
+#
# Test the DBF file module.
#
+print "\n\n--- testing dbflib ---"
+
def make_dbf(file):
# create a new dbf file and add three fields.
dbf = dbflib.create(file)
dbf.add_field("NAME", dbflib.FTString, 20, 0)
dbf.add_field("INT", dbflib.FTInteger, 10, 0)
dbf.add_field("FLOAT", dbflib.FTDouble, 10, 4)
+ dbf.add_field("BOOL", dbflib.FTLogical, 1, 0)
def add_dbf_records(file):
# add some records to file
dbf = dbflib.open(file, "r+b")
# Records can be added as a dictionary...
- dbf.write_record(0, {'NAME': "Weatherwax", "INT":1, "FLOAT":3.1415926535})
+ dbf.write_record(0, {'NAME': "Weatherwax", "INT":1, "FLOAT":3.1415926535, "BOOL":True})
# ... or as a sequence
- dbf.write_record(1, ("Ogg", 2, -1000.1234))
+ dbf.write_record(1, ("Ogg", 2, -1000.1234, False))
def list_dbf(file):
# print the contents of a dbf file to stdout
@@ -112,17 +202,19 @@
format = ""
for i in range(dbf.field_count()):
type, name, len, decc = dbf.field_info(i)
- if type == 0:
+ if type == dbflib.FTString:
format = format + " %%(%s)%ds" % (name, len)
- elif type == 1:
+ elif type == dbflib.FTInteger:
format = format + " %%(%s)%dd" % (name, len)
- elif type == 2:
+ elif type == dbflib.FTDouble:
format = format + " %%(%s)%dg" % (name, len)
+ elif type == dbflib.FTLogical:
+ format = format + " %%(%s)s" % name
print format
for i in range(dbf.record_count()):
print format % dbf.read_record(i)
-make_dbf("testfile")
-add_dbf_records("testfile")
-list_dbf("testfile")
+make_dbf(filename)
+add_dbf_records(filename)
+list_dbf(filename)
Modified: trunk/thuban/libraries/pyshapelib/setup.py
===================================================================
--- trunk/thuban/libraries/pyshapelib/setup.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/setup.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,72 +1,118 @@
import os
+import os.path
import sys
from distutils.core import setup, Extension
from distutils.util import convert_path
-# try to determine the directory where the shapelib source files are.
-# There are currently two supported situations.
-#
-# 1. "Standalone" build: the parent directory is the shapelib source
-# directory
-# 2. Built in the Thuban source tree where ../shapelib/ relative to the
-# directory containing this setup.py contains (the relevant parts of)
-# shapelib
-#
-# 3. Binary build with e.g. bdist_rpm. This takes place deep in the
-# build directory.
+def find_shapelib():
+ '''
+ try to determine the directory where the shapelib source files are.
+ There are currently three supported situations.
+
+ 1. "Standalone" build: the parent directory is the shapelib source
+ directory
+ 2. Built in the Thuban source tree where ../shapelib/ relative to the
+ directory containing this setup.py contains (the relevant parts of)
+ shapelib
+
+ 3. Binary build with e.g. bdist_rpm. This takes place deep in the
+ build directory.
+
+ os.path expects filenames in OS-specific form so we have to construct
+ the files with os.path functions. distutils, OTOH, uses posix-style
+ filenames exclusively, so we use posix conventions when making
+ filenames for distutils.
+ '''
+ for shp_dir in ["..", "../shapelib", "../../../../../../shapelib"]:
+ if (os.path.isdir(convert_path(shp_dir))
+ and os.path.exists(os.path.join(convert_path(shp_dir), "shpopen.c"))):
+ # shp_dir contains shpopen.c, so assume it's the directory with
+ # the shapefile library to use
+ return shp_dir
+ print >>sys.stderr, "no shapelib directory found"
+ sys.exit(1)
-# os.path expects filenames in OS-specific form so we have to construct
-# the files with os.path functions. distutils, OTOH, uses posix-style
-# filenames exclusively, so we use posix conventions when making
-# filenames for distutils.
-for shp_dir in ["..", "../shapelib", "../../../../../../shapelib"]:
- if (os.path.isdir(convert_path(shp_dir))
- and os.path.exists(os.path.join(convert_path(shp_dir), "shpopen.c"))):
- # shp_dir contains shpopen.c, so assume it's the directory with
- # the shapefile library to use
- break
-else:
- print >>sys.stderr, "no shapelib directory found"
- sys.exit(1)
+shp_dir = find_shapelib()
-def dbf_macros():
- """Return the macros to define when compiling the dbflib wrapper.
- The returned list specifies one macro, HAVE_UPDATE_HEADER, which is
- '1' if the dbflib version we will be compiling with has the
- DBFUpdateHeader function and '0' otherwise. To check whether
- DBFUpdateHeader is available, we scan shapefil.h for the string
- 'DBFUpdateHeader'.
- """
- f = open(convert_path(shp_dir + "/shapefil.h"))
- contents = f.read()
- f.close()
- if contents.find("DBFUpdateHeader") >= 0:
- return [("HAVE_UPDATE_HEADER", "1")]
- else:
- return [("HAVE_UPDATE_HEADER", "0")]
-extensions = [Extension("shapelibc",
- ["shapelib_wrap.c",
- shp_dir + "/shpopen.c",
- shp_dir + "/shptree.c"],
- include_dirs = [shp_dir]),
- Extension("shptree",
- ["shptreemodule.c"],
- include_dirs = [shp_dir]),
- Extension("dbflibc",
- ["dbflib_wrap.c",
- shp_dir + "/dbfopen.c"],
- include_dirs = [shp_dir],
- define_macros = dbf_macros())]
+def find_sahooks_files():
+ '''
+ Return a filelist of additional files implementing the SA hooks.
+ '''
+ candidates = [shp_dir + "/safileio.c"]
+ return filter(os.path.exists, candidates)
+sahooks_files = find_sahooks_files()
+
+
+
+def determine_macros():
+ '''
+ Return the macros to define when compiling the dbflib wrapper.
+
+ The returned list specifies the following macros:
+ - HAVE_UPDATE_HEADER, which is
+ '1' if the dbflib version we will be compiling with has the
+ DBFUpdateHeader function and '0' otherwise. To check whether
+ DBFUpdateHeader is available, we scan shapefil.h for the string
+ 'DBFUpdateHeader'.
+ - HAVE_CODE_PAGE, which is '1' if the dbflib version we will be
+ compiling with has the DBFGetCodePage function and '0' otherwise.
+ Again, shapefil.h is scanned to check this.
+ '''
+ f = open(convert_path(shp_dir + "/shapefil.h"))
+ contents = f.read()
+ f.close()
+
+ def have(keyword):
+ if keyword in contents:
+ return "1"
+ return "0"
+
+ return [
+ ("HAVE_UPDATE_HEADER", have("DBFUpdateHeader")),
+ ("HAVE_CODE_PAGE", have("DBFGetCodePage")),
+ ("HAVE_DELETE_FIELD", have("DBFDeleteField")),
+ ("DISABLE_CVSID", "1")]
+
+macros = determine_macros()
+
+
+
+def determine_cflags():
+ if "win32" in sys.platform:
+ # assume we're going to use MSVC.
+ return [
+ "/wd4996" # disable warning on "potential unsafe" strncpy
+ ]
+ return []
+
+
+cflags = determine_cflags()
+
+
+
+def make_extension(name, *sources):
+ return Extension(name, list(sources), include_dirs=[shp_dir], define_macros=macros, extra_compile_args=cflags)
+
+
+
+
+extensions = [
+ make_extension("shapelib", "shapelibmodule.c", shp_dir + "/shpopen.c", shp_dir + "/shptree.c", *sahooks_files),
+ make_extension("shptree", "shptreemodule.c"),
+ make_extension("dbflib", "dbflibmodule.c", shp_dir + "/dbfopen.c", *sahooks_files)
+ ]
+
+
+
setup(name = "pyshapelib",
- version = "0.3",
- description = "Python bindings for shapelib",
- author = "Bernhard Herzog",
- author_email = "bh at intevation.de",
- url = "ftp:intevation.de/users/bh",
- py_modules = ["shapelib", "dbflib"],
- ext_modules = extensions)
+ version = "0.4",
+ description = "Python bindings for shapelib",
+ author = "Bernhard Herzog, Bram de Greve",
+ author_email = "bh at intevation.de, bram.degreve at bramz.net",
+ url = "ftp:intevation.de/users/bh",
+ ext_modules = extensions)
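For clarity, this is what the make_extension() helper expands to for the dbflib module, written out long-hand; shp_dir, sahooks_files, macros and cflags are the values computed earlier in this setup.py, and Extension is the distutils class imported at the top:

    Extension("dbflib",
              ["dbflibmodule.c", shp_dir + "/dbfopen.c"] + sahooks_files,
              include_dirs=[shp_dir],
              define_macros=macros,
              extra_compile_args=cflags)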
Deleted: trunk/thuban/libraries/pyshapelib/shapelib.i
===================================================================
--- trunk/thuban/libraries/pyshapelib/shapelib.i 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/shapelib.i 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,612 +0,0 @@
-/* SWIG (www.swig.org) interface file for shapelib
- *
- * At the moment (Dec 2000) this file is only useful to generate Python
- * bindings. Invoke swig as follows:
- *
- * swig -python -shadow shapelib.i
- *
- * to generate shapelib_wrap.c and shapelib.py. shapelib_wrap.c
- * defines a bunch of Python-functions that wrap the appropriate
- * shapelib functions and shapelib.py contains an object oriented
- * wrapper around shapelib_wrap.c.
- *
- * Shapelib, and hence this module too, defines two types of objects,
- * shapes and shapefiles.
- */
-
-%module shapelib
-
-/*
- * First, a %{,%}-Block. These blocks are copied verbatim to the
- * shapelib_wrap.c file and are not parsed by SWIG. This is the place to
- * import headerfiles and define helper-functions that are needed by the
- * automatically generated wrappers.
- */
-
-%{
-
-/* import the shapelib header file. */
-#include "shapefil.h"
-#include "pyshapelib_api.h"
-
-/*
- * Rename a few shapelib functions that are effectively methods with
- * preprocessor macros so that they have the names that swig expects
- * (e.g. the destructor of SHPObject has to be called delete_SHPObject)
- */
-
-#define delete_SHPObject SHPDestroyObject
-
-/*
- * The extents() method of SHPObject.
- *
- * Return the extents as a tuple of two 4-element lists with the min.
- * and max. values of x, y, z, m.
- */
-static PyObject *
-SHPObject_extents(SHPObject *object)
-{
- return Py_BuildValue("[dddd][dddd]",
- object->dfXMin, object->dfYMin, object->dfZMin,
- object->dfMMin,
- object->dfXMax, object->dfYMax, object->dfZMax,
- object->dfMMax);
-}
-
-
-/*
- * The vertices() method of SHPObject.
- *
- * Return the x and y coords of the vertices as a list of lists of
- * tuples.
- */
-
-static PyObject* build_vertex_list(SHPObject *object, int index, int length);
-
-static PyObject*
-SHPObject_vertices(SHPObject *object)
-{
- PyObject *result = NULL;
- PyObject *part = NULL;
- int part_idx, vertex_idx;
- int length = 0;
-
-
- if (object->nParts > 0)
- {
- /* A multipart shape. Usual for SHPT_ARC and SHPT_POLYGON */
-
- result = PyList_New(object->nParts);
- if (!result)
- return NULL;
-
- for (part_idx = 0, vertex_idx = 0; part_idx < object->nParts;
- part_idx++)
- {
- if (part_idx < object->nParts - 1)
- length = (object->panPartStart[part_idx + 1]
- - object->panPartStart[part_idx]);
- else
- length = object->nVertices - object->panPartStart[part_idx];
-
- part = build_vertex_list(object, vertex_idx, length);
- if (!part)
- goto fail;
-
- if (PyList_SetItem(result, part_idx, part) < 0)
- goto fail;
-
- vertex_idx += length;
- }
- }
- else
- {
- /* only one part. usual for SHPT_POINT */
- result = build_vertex_list(object, 0, object->nVertices);
- }
-
- return result;
-
- fail:
- Py_XDECREF(part);
- Py_DECREF(result);
- return NULL;
-}
-
-
-/* Return 'length' coordinates of the shape object, starting at vertex
- * 'index', as a Python-list of tuples. Helper function for
- * SHPObject_vertices.
- */
-static PyObject*
-build_vertex_list(SHPObject *object, int index, int length)
-{
- int i;
- PyObject * list;
- PyObject * vertex = NULL;
-
- list = PyList_New(length);
- if (!list)
- return NULL;
-
- for (i = 0; i < length; i++, index++)
- {
- vertex = Py_BuildValue("dd", object->padfX[index],
- object->padfY[index]);
- if (!vertex)
- goto fail;
- if (PyList_SetItem(list, i, vertex) < 0)
- goto fail;
- }
-
- return list;
-
- fail:
- Py_XDECREF(vertex);
- Py_DECREF(list);
- return NULL;
-}
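A pure-Python sketch of the part-splitting logic above may make the role of panPartStart clearer (names mirror the SHPObject fields; illustration only, not part of the wrapper):

    def split_parts(pan_part_start, padf_x, padf_y):
        # part i runs from panPartStart[i] up to, but not including,
        # panPartStart[i + 1]; the last part runs to the end of the arrays
        boundaries = list(pan_part_start) + [len(padf_x)]
        parts = []
        for start, end in zip(boundaries[:-1], boundaries[1:]):
            parts.append([(padf_x[k], padf_y[k]) for k in range(start, end)])
        return parts

    # e.g. an outer ring of 5 vertices followed by a 5-vertex hole:
    # split_parts([0, 5], xs, ys) -> [vertices 0..4, vertices 5..9]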
-
-
-
-
-
-/* The constructor of SHPObject. parts is a list of lists of tuples
- * describing the parts and their vertices just like the output of the
- * vertices() method. part_type_list is the list of part-types and may
- * be NULL. For the meaning of the part-types and their default value
- * see the Shapelib documentation.
- */
-SHPObject * new_SHPObject(int type, int id, PyObject * parts,
- PyObject * part_type_list)
-{
- /* arrays to hold the x and y coordinates of the vertices */
- double *xs = NULL, *ys = NULL;
- /* number of all vertices of all parts */
- int num_vertices;
- /* number of parts in the list parts */
- int num_parts;
- /* start index in xs and ys of the part currently worked on */
- int part_start;
- /* array of start indices in xs and ys as expected by shapelib */
- int *part_starts = NULL;
-
- /* generic counter */
- int i;
-
- /* array of part types. holds the converted content of
- * part_type_list. Stays NULL if part_type_list is NULL
- */
- int *part_types = NULL;
-
- /* temporary python objects referring to the list items being
- * worked on.
- */
- PyObject * part = NULL, *tuple = NULL;
-
- /* The result object */
- SHPObject *result;
-
- num_parts = PySequence_Length(parts);
- num_vertices = 0;
-
- /* parts and part_types have to have the same lengths */
- if (part_type_list
- && PySequence_Length(parts) != PySequence_Length(part_type_list))
- {
- PyErr_SetString(PyExc_TypeError,
- "parts and part_types have to have the same lengths");
- return NULL;
- }
-
- /* determine how many vertices there are altogether */
- for (i = 0; i < num_parts; i++)
- {
- PyObject * part = PySequence_GetItem(parts, i);
- if (!part)
- return NULL;
- num_vertices += PySequence_Length(part);
- Py_DECREF(part);
- }
-
- /* allocate the memory for the various arrays and check for memory
- errors */
- xs = malloc(num_vertices * sizeof(double));
- ys = malloc(num_vertices * sizeof(double));
- part_starts = malloc(num_parts * sizeof(int));
- if (part_type_list)
- part_types = malloc(num_parts * sizeof(int));
-
- if (!xs || !ys || !part_starts || (part_type_list && !part_types))
- {
- PyErr_NoMemory();
- goto fail;
- }
-
- /* convert the part types */
- if (part_type_list)
- {
- for (i = 0; i < num_parts; i++)
- {
- PyObject * otype = PySequence_GetItem(part_type_list, i);
- if (!otype)
- return NULL;
- part_types[i] = PyInt_AsLong(otype);
- Py_DECREF(otype);
- }
- }
-
- /* convert the list of parts */
- part_start = 0;
- for (i = 0; i < num_parts; i++)
- {
- int j, length;
-
- part = PySequence_GetItem(parts, i);
- length = PySequence_Length(part);
- part_starts[i] = part_start;
-
- for (j = 0; j < length; j++)
- {
- tuple = PySequence_GetItem(part, j);
- if (!tuple)
- goto fail;
-
- if (!PyArg_ParseTuple(tuple, "dd", xs + part_start + j,
- ys + part_start + j))
- {
- goto fail;
- }
- Py_DECREF(tuple);
- tuple = NULL;
- }
- Py_DECREF(part);
- part = NULL;
- part_start += length;
- }
-
- result = SHPCreateObject(type, id, num_parts, part_starts, part_types,
- num_vertices, xs, ys, NULL, NULL);
- free(xs);
- free(ys);
- free(part_starts);
- free(part_types);
- return result;
-
- fail:
- free(xs);
- free(ys);
- free(part_starts);
- free(part_types);
- Py_XDECREF(part);
- Py_XDECREF(tuple);
- return NULL;
-}
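The constructor performs the inverse transformation: it flattens the nested Python lists into the flat coordinate arrays plus the part_starts index array that SHPCreateObject() expects. As a sketch:

    def flatten_parts(parts):
        xs, ys, part_starts = [], [], []
        for part in parts:
            part_starts.append(len(xs))
            for x, y in part:
                xs.append(x)
                ys.append(y)
        return xs, ys, part_starts

    # flatten_parts([[(0, 0), (0, 40), (40, 40), (40, 0), (0, 0)]])
    # -> ([0, 0, 40, 40, 0], [0, 40, 40, 0, 0], [0])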
-
-%}
-
-
-
-/*
- * The SWIG Interface definition.
- */
-
-/* include some common SWIG type definitions and standard exception
- handling code */
-%include typemaps.i
-%include exception.i
-
-
-/*
- * SHPObject -- Represents one shape
- */
-
-/* Exception typemap for the SHPObject constructor. The constructor is
- the wrapper function defined above, which returns NULL in case of
- error. */
-
-%typemap(python,except) SHPObject*new_SHPObject {
- $function;
- if (PyErr_Occurred())
- return NULL;
-}
-
-/* Define the SHPObject struct for SWIG. This has to have the same name
- * as the underlying C-struct in shapefil.h, but we don't have to repeat
- * all the fields here, only those we want to access directly, and we
- * can define methods for the object oriented interface.
- */
-
-typedef struct {
-
- /* The shape object has two read-only attributes: */
-
- /* The type of the shape. In the C-struct the field is
- * called 'nSHPType' but for the python bindings 'type' is more
- * appropriate.
- */
- %readonly %name(type) int nSHPType;
-
- /* The id of the shape. Here 'id' is a better name than 'nShapeId'. */
- %readonly %name(id) int nShapeId;
-
- /* The methods */
- %addmethods {
-
- /* the constructor */
- SHPObject(int type, int id, PyObject * parts,
- PyObject * part_types = NULL);
-
- /* The destructor */
- ~SHPObject();
-
- /* extents and vertices correspond to the SHPObject_extents and
- * SHPObject_vertices defined above
- */
- PyObject *extents();
- PyObject *vertices();
- }
-} SHPObject;
-
-
-/*
- * ShapeFile -- Represents the shape file
- */
-
-/* Here we do things a little differently. We define a new C-struct that
- * holds the SHPHandle. This is mainly done so we can separate the
- * close() method from the destructor but it also helps with exception
- * handling.
- *
- * After the ShapeFile has been opened or created the handle is not
- * NULL. The close() method closes the file and sets handle to NULL as
- * an indicator that the file has been closed.
- */
-
-/* First, define the C-struct */
-%{
- typedef struct {
- SHPHandle handle;
- } ShapeFile;
-%}
-
-/* define and use some typemaps for the info() method whose
- * C-implementation has four output parameters that are returned through
- * pointers passed into the function. SWIG already has definitions for
- * common types such as int* and we can use those for the first two
- * parameters:
- */
-
-%apply int * OUTPUT { int * output_entities }
-%apply int * OUTPUT { int * output_type }
-
-/* for the last two, the 4-element arrays of min- and max-values, we
- * have to define our own typemaps:
- */
-%typemap (python,ignore) double * extents(double temp[4]) {
- $target = temp;
-}
-
-%typemap (python,argout) double * extents {
- PyObject * list = Py_BuildValue("[dddd]",
- $source[0], $source[1],
- $source[2], $source[3]);
- $target = t_output_helper($target,list);
-}
-
-%apply double * extents { double * output_min_bounds }
-%apply double * extents { double * output_max_bounds }
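With these typemaps applied, all four output parameters of SHPGetInfo end up in one return tuple, so at the Python level info() is used like this (file name as in pytest.py):

    import shapelib

    shp = shapelib.ShapeFile("testfile")
    num_shapes, shape_type, min_bounds, max_bounds = shp.info()
    # min_bounds and max_bounds are 4-element lists: [x, y, z, m]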
-
-/* The first argument to the ShapeFile methods is a ShapeFile pointer.
- * We have to check whether handle is not NULL in most methods but not
- * all. In the destructor and the close method, it's OK for handle to be
- * NULL. We achieve this by checking whether the preprocessor macro
- * NOCHECK_$name is defined. SWIG replaces $name with the name of the
- * function for which the code is inserted. In the %{,%}-block below we
- * define the macros for the destructor and the close() method.
- */
-
-
-%typemap(python,check) ShapeFile *{
- %#ifndef NOCHECK_$name
- if (!$target || !$target->handle)
- SWIG_exception(SWIG_TypeError, "shapefile already closed");
- %#endif
-}
-
-%{
-#define NOCHECK_delete_ShapeFile
-#define NOCHECK_ShapeFile_close
-%}
-
-/* An exception handler for the constructor and the module level open()
- * and create() functions.
- *
- * Annoyingly, we *have* to put braces around the SWIG_exception()
- * calls, at least in the python case, because of the way the macro is
- * written. Of course, always putting braces around the branches of an
- * if-statement is often considered good practice.
- */
-%typemap(python,except) ShapeFile * {
- $function;
- if (!$source)
- {
- SWIG_exception(SWIG_MemoryError, "no memory");
- }
- else if (!$source->handle)
- {
- SWIG_exception(SWIG_IOError, "$name failed");
- }
-}
-
-
-/*
- * The SWIG-version of the ShapeFile struct.
- */
-
-typedef struct
-{
- /* Only methods and no attributes here: */
- %addmethods {
-
- /* The constructor. Takes two arguments, the filename and the
- * optinal mode which are passed through to SHPOpen (due to the
- * renaming trick)
- */
- ShapeFile(char *file, char * mode = "rb") {
- ShapeFile * self = malloc(sizeof(ShapeFile));
- if (self)
- self->handle = SHPOpen(file, mode);
- return self;
- }
-
- /* The destructor. Equivalent to SHPClose */
- ~ShapeFile() {
- if (self->handle)
- SHPClose(self->handle);
- free(self);
- }
-
- /* close the shape file and set handle to NULL */
- void close() {
- if (self->handle)
- {
- SHPClose(self->handle);
- self->handle = NULL;
- }
- }
-
- /* info() -- Return a tuple (NUM_SHAPES, TYPE, MIN, MAX) where
- * NUM_SHAPES is the number of shapes in the file, TYPE is the
- * shape type and MIN and MAX are 4-element lists with the min.
- * and max. values of the data.
- *
- * The arguments of the underlying shapelib function SHPGetInfo
- * are all output parameters. To tell SWIG this, we have defined
- * some typemaps above
- */
- void info(int * output_entities, int * output_type,
- double * output_min_bounds, double *output_max_bounds) {
- SHPGetInfo(self->handle, output_entities, output_type,
- output_min_bounds, output_max_bounds);
- }
-
- /* Return object number i */
- %new SHPObject * read_object(int i) {
- return SHPReadObject(self->handle, i);
- }
-
- /* Write an object */
- int write_object(int iShape, SHPObject * psObject) {
- return SHPWriteObject(self->handle, iShape, psObject);
- }
-
- /* Return the shapelib SHPHandle as a Python CObject */
- PyObject * cobject() {
- return PyCObject_FromVoidPtr(self->handle, NULL);
- }
- }
-
-} ShapeFile;
-
-
-/*
- * Two module level functions, open() and create() that correspond to
- * SHPOpen and SHPCreate respectively. open() is equivalent to the
- * ShapeFile constructor.
- */
-
-%{
- ShapeFile * open_ShapeFile(const char *filename, const char * mode) {
- ShapeFile * self = malloc(sizeof(ShapeFile));
- if (self)
- self->handle = SHPOpen(filename, mode);
- return self;
- }
-%}
-
-%name(open) %new ShapeFile *open_ShapeFile(const char *filename,
- const char * mode = "rb");
-
-
-%{
- ShapeFile * create_ShapeFile(const char *filename, int type) {
- ShapeFile * self = malloc(sizeof(ShapeFile));
- if (self)
- self->handle = SHPCreate(filename, type);
- return self;
- }
-%}
-
-%name(create) %new ShapeFile * create_ShapeFile(const char *filename,
- int type);
-
-
-/* Module level function to expose some of the shapelib functions linked
- * with the shapefile C-module to other Python extension modules. This
- * is a kludge to make a Thuban extension work that reads shapes from
- * shapefiles opened by the shapefile module.
- */
-
-%{
- static PyShapeLibAPI the_api = {
- SHPReadObject,
- SHPDestroyObject,
- SHPCreateTree,
- SHPDestroyTree,
- SHPTreeFindLikelyShapes
- };
-
- PyObject * c_api() {
- return PyCObject_FromVoidPtr(&the_api, NULL);
- }
-%}
-
-PyObject * c_api();
-
-
-/*
- * Module Level functions
- */
-
-/* convert shapefile types to names */
-%name(type_name) const char *SHPTypeName(int nSHPType);
-%name(part_type_name) const char *SHPPartTypeName(int nPartType);
-
-
-/*
- * Finally, constants copied from shapefil.h
- */
-
-/* -------------------------------------------------------------------- */
-/* Shape types (nSHPType) */
-/* -------------------------------------------------------------------- */
-#define SHPT_NULL 0
-#define SHPT_POINT 1
-#define SHPT_ARC 3
-#define SHPT_POLYGON 5
-#define SHPT_MULTIPOINT 8
-#define SHPT_POINTZ 11
-#define SHPT_ARCZ 13
-#define SHPT_POLYGONZ 15
-#define SHPT_MULTIPOINTZ 18
-#define SHPT_POINTM 21
-#define SHPT_ARCM 23
-#define SHPT_POLYGONM 25
-#define SHPT_MULTIPOINTM 28
-#define SHPT_MULTIPATCH 31
-
-
-/* -------------------------------------------------------------------- */
-/* Part types - everything but SHPT_MULTIPATCH just uses */
-/* SHPP_RING. */
-/* -------------------------------------------------------------------- */
-
-#define SHPP_TRISTRIP 0
-#define SHPP_TRIFAN 1
-#define SHPP_OUTERRING 2
-#define SHPP_INNERRING 3
-#define SHPP_FIRSTRING 4
-#define SHPP_RING 5
-
-
Deleted: trunk/thuban/libraries/pyshapelib/shapelib.py
===================================================================
--- trunk/thuban/libraries/pyshapelib/shapelib.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/shapelib.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,119 +0,0 @@
-# This file was created automatically by SWIG.
-import shapelibc
-class SHPObject:
- def __init__(self,*args):
- self.this = apply(shapelibc.new_SHPObject,args)
- self.thisown = 1
-
- def __del__(self,shapelibc=shapelibc):
- if self.thisown == 1 :
- shapelibc.delete_SHPObject(self)
- def extents(*args):
- val = apply(shapelibc.SHPObject_extents,args)
- return val
- def vertices(*args):
- val = apply(shapelibc.SHPObject_vertices,args)
- return val
- __setmethods__ = {
- }
- def __setattr__(self,name,value):
- if (name == "this") or (name == "thisown"): self.__dict__[name] = value; return
- method = SHPObject.__setmethods__.get(name,None)
- if method: return method(self,value)
- self.__dict__[name] = value
- __getmethods__ = {
- "type" : shapelibc.SHPObject_type_get,
- "id" : shapelibc.SHPObject_id_get,
- }
- def __getattr__(self,name):
- method = SHPObject.__getmethods__.get(name,None)
- if method: return method(self)
- raise AttributeError,name
- def __repr__(self):
- return "<C SHPObject instance at %s>" % (self.this,)
-class SHPObjectPtr(SHPObject):
- def __init__(self,this):
- self.this = this
- self.thisown = 0
- self.__class__ = SHPObject
-
-
-
-class ShapeFile:
- def __init__(self,*args):
- self.this = apply(shapelibc.new_ShapeFile,args)
- self.thisown = 1
-
- def __del__(self,shapelibc=shapelibc):
- if self.thisown == 1 :
- shapelibc.delete_ShapeFile(self)
- def close(*args):
- val = apply(shapelibc.ShapeFile_close,args)
- return val
- def info(*args):
- val = apply(shapelibc.ShapeFile_info,args)
- return val
- def read_object(*args):
- val = apply(shapelibc.ShapeFile_read_object,args)
- if val: val = SHPObjectPtr(val) ; val.thisown = 1
- return val
- def write_object(*args):
- val = apply(shapelibc.ShapeFile_write_object,args)
- return val
- def cobject(*args):
- val = apply(shapelibc.ShapeFile_cobject,args)
- return val
- def __repr__(self):
- return "<C ShapeFile instance at %s>" % (self.this,)
-class ShapeFilePtr(ShapeFile):
- def __init__(self,this):
- self.this = this
- self.thisown = 0
- self.__class__ = ShapeFile
-
-
-
-
-
-#-------------- FUNCTION WRAPPERS ------------------
-
-def open(*args, **kwargs):
- val = apply(shapelibc.open,args,kwargs)
- if val: val = ShapeFilePtr(val); val.thisown = 1
- return val
-
-def create(*args, **kwargs):
- val = apply(shapelibc.create,args,kwargs)
- if val: val = ShapeFilePtr(val); val.thisown = 1
- return val
-
-c_api = shapelibc.c_api
-
-type_name = shapelibc.type_name
-
-part_type_name = shapelibc.part_type_name
-
-
-
-#-------------- VARIABLE WRAPPERS ------------------
-
-SHPT_NULL = shapelibc.SHPT_NULL
-SHPT_POINT = shapelibc.SHPT_POINT
-SHPT_ARC = shapelibc.SHPT_ARC
-SHPT_POLYGON = shapelibc.SHPT_POLYGON
-SHPT_MULTIPOINT = shapelibc.SHPT_MULTIPOINT
-SHPT_POINTZ = shapelibc.SHPT_POINTZ
-SHPT_ARCZ = shapelibc.SHPT_ARCZ
-SHPT_POLYGONZ = shapelibc.SHPT_POLYGONZ
-SHPT_MULTIPOINTZ = shapelibc.SHPT_MULTIPOINTZ
-SHPT_POINTM = shapelibc.SHPT_POINTM
-SHPT_ARCM = shapelibc.SHPT_ARCM
-SHPT_POLYGONM = shapelibc.SHPT_POLYGONM
-SHPT_MULTIPOINTM = shapelibc.SHPT_MULTIPOINTM
-SHPT_MULTIPATCH = shapelibc.SHPT_MULTIPATCH
-SHPP_TRISTRIP = shapelibc.SHPP_TRISTRIP
-SHPP_TRIFAN = shapelibc.SHPP_TRIFAN
-SHPP_OUTERRING = shapelibc.SHPP_OUTERRING
-SHPP_INNERRING = shapelibc.SHPP_INNERRING
-SHPP_FIRSTRING = shapelibc.SHPP_FIRSTRING
-SHPP_RING = shapelibc.SHPP_RING
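
The generated shadow module above contains no logic of its own beyond the thisown flag, which decides whether delete_SHPObject()/delete_ShapeFile() run when a wrapper is garbage collected; everything else forwards to shapelibc. For completeness, the corresponding write path, again assuming the replacement module keeps SHPObject, create() and the constants ("testpoly" is a placeholder output name):

    # Create a shapefile holding one square polygon.  SHPObject takes
    # (type, id, parts[, part_types]); parts is a list of lists of
    # (x, y) tuples, the same structure vertices() returns.
    import shapelib

    square = [[(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0), (0.0, 0.0)]]
    obj = shapelib.SHPObject(shapelib.SHPT_POLYGON, 1, square)

    out = shapelib.create("testpoly", shapelib.SHPT_POLYGON)
    out.write_object(-1, obj)     # iShape == -1 appends a new shape
    out.close()
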
Deleted: trunk/thuban/libraries/pyshapelib/shapelib_wrap.c
===================================================================
--- trunk/thuban/libraries/pyshapelib/shapelib_wrap.c 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/shapelib_wrap.c 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,1411 +0,0 @@
-/* ----------------------------------------------------------------------------
- * This file was automatically generated by SWIG (http://www.swig.org).
- * Version 1.3u-20050630-1524 (Alpha 5)
- *
- * This file is not intended to be easily readable and contains a number of
- * coding conventions designed to improve portability and efficiency. Do not make
- * changes to this file unless you know what you are doing--modify the SWIG
- * interface file instead.
- * ----------------------------------------------------------------------------- */
-
-#define SWIGPYTHON
-/***********************************************************************
- * common.swg
- *
- * This file contains generic SWIG runtime support for pointer
- * type checking as well as a few commonly used macros to control
- * external linkage.
- *
- * Author : David Beazley (beazley at cs.uchicago.edu)
- *
- * Copyright (c) 1999-2000, The University of Chicago
- *
- * This file may be freely redistributed without license or fee provided
- * this copyright message remains intact.
- ************************************************************************/
-
-#include <string.h>
-
-#if defined(_WIN32) || defined(__WIN32__)
-# if defined(_MSC_VER)
-# if defined(STATIC_LINKED)
-# define SWIGEXPORT(a) a
-# else
-# define SWIGEXPORT(a) __declspec(dllexport) a
-# endif
-# else
-# if defined(__BORLANDC__)
-# define SWIGEXPORT(a) a _export
-# else
-# define SWIGEXPORT(a) a
-# endif
-#endif
-#else
-# define SWIGEXPORT(a) a
-#endif
-
-#ifdef SWIG_GLOBAL
-#define SWIGRUNTIME(a) SWIGEXPORT(a)
-#else
-#define SWIGRUNTIME(a) static a
-#endif
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-typedef struct swig_type_info {
- char *name;
- void *(*converter)(void *);
- char *str;
- struct swig_type_info *next;
- struct swig_type_info *prev;
-} swig_type_info;
-
-#ifdef SWIG_NOINCLUDE
-SWIGEXPORT(swig_type_info *) SWIG_TypeRegister(swig_type_info *);
-SWIGEXPORT(swig_type_info *) SWIG_TypeCheck(char *c, swig_type_info *);
-SWIGEXPORT(void *) SWIG_TypeCast(swig_type_info *, void *);
-#else
-
-static swig_type_info *swig_type_list = 0;
-
-/* Register a type mapping with the type-checking */
-SWIGRUNTIME(swig_type_info *)
-SWIG_TypeRegister(swig_type_info *ti)
-{
- swig_type_info *tc, *head, *ret, *next;
- /* Check to see if this type has already been registered */
- tc = swig_type_list;
- while (tc) {
- if (strcmp(tc->name, ti->name) == 0) {
- /* Already exists in the table. Just add additional types to the list */
- head = tc;
- next = tc->next;
- goto l1;
- }
- tc = tc->prev;
- }
- head = ti;
- next = 0;
-
- /* Place in list */
- ti->prev = swig_type_list;
- swig_type_list = ti;
-
- /* Build linked lists */
- l1:
- ret = head;
- tc = ti + 1;
- /* Patch up the rest of the links */
- while (tc->name) {
- head->next = tc;
- tc->prev = head;
- head = tc;
- tc++;
- }
- head->next = next;
- return ret;
-}
-
-/* Check the typename */
-SWIGRUNTIME(swig_type_info *)
-SWIG_TypeCheck(char *c, swig_type_info *ty)
-{
- swig_type_info *s;
- if (!ty) return 0; /* Void pointer */
- s = ty->next; /* First element always just a name */
- while (s) {
- if (strcmp(s->name,c) == 0) {
- if (s == ty->next) return s;
- /* Move s to the top of the linked list */
- s->prev->next = s->next;
- if (s->next) {
- s->next->prev = s->prev;
- }
- /* Insert s as second element in the list */
- s->next = ty->next;
- if (ty->next) ty->next->prev = s;
- ty->next = s;
- return s;
- }
- s = s->next;
- }
- return 0;
-}
-
-/* Cast a pointer (needed for C++ inheritance) */
-SWIGRUNTIME(void *)
-SWIG_TypeCast(swig_type_info *ty, void *ptr)
-{
- if ((!ty) || (!ty->converter)) return ptr;
- return (*ty->converter)(ptr);
-}
-
-/* Search for a swig_type_info structure */
-SWIGRUNTIME(void *)
-SWIG_TypeQuery(const char *name) {
- swig_type_info *ty = swig_type_list;
- while (ty) {
- if (ty->str && (strcmp(name,ty->str) == 0)) return ty;
- if (ty->name && (strcmp(name,ty->name) == 0)) return ty;
- ty = ty->prev;
- }
- return 0;
-}
-
-#endif
-
-#ifdef __cplusplus
-}
-#endif
-
-
-
-/***********************************************************************
- * python.swg
- *
- * This file contains the runtime support for Python modules
- * and includes code for managing global variables and pointer
- * type checking.
- *
- * Author : David Beazley (beazley at cs.uchicago.edu)
- ************************************************************************/
-
-#include <stdlib.h>
-#include "Python.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#define SWIG_PY_INT 1
-#define SWIG_PY_FLOAT 2
-#define SWIG_PY_STRING 3
-#define SWIG_PY_POINTER 4
-
-/* Constant information structure */
-typedef struct swig_const_info {
- int type;
- char *name;
- long lvalue;
- double dvalue;
- void *pvalue;
- swig_type_info **ptype;
-} swig_const_info;
-
-#ifdef SWIG_NOINCLUDE
-
-SWIGEXPORT(PyObject *) SWIG_newvarlink();
-SWIGEXPORT(void) SWIG_addvarlink(PyObject *, char *, PyObject *(*)(void), int (*)(PyObject *));
-SWIGEXPORT(int) SWIG_ConvertPtr(PyObject *, void **, swig_type_info *, int);
-SWIGEXPORT(void) SWIG_MakePtr(char *c, void *, swig_type_info *);
-SWIGEXPORT(PyObject *) SWIG_NewPointerObj(void *, swig_type_info *);
-SWIGEXPORT(void) SWIG_InstallConstants(PyObject *d, swig_const_info constants[]);
-
-#else
-
-/* -----------------------------------------------------------------------------
- * global variable support code.
- * ----------------------------------------------------------------------------- */
-
-typedef struct swig_globalvar {
- char *name; /* Name of global variable */
- PyObject *(*get_attr)(void); /* Return the current value */
- int (*set_attr)(PyObject *); /* Set the value */
- struct swig_globalvar *next;
-} swig_globalvar;
-
-typedef struct swig_varlinkobject {
- PyObject_HEAD
- swig_globalvar *vars;
-} swig_varlinkobject;
-
-static PyObject *
-swig_varlink_repr(swig_varlinkobject *v) {
- v = v;
- return PyString_FromString("<Global variables>");
-}
-
-static int
-swig_varlink_print(swig_varlinkobject *v, FILE *fp, int flags) {
- swig_globalvar *var;
- flags = flags;
- fprintf(fp,"Global variables { ");
- for (var = v->vars; var; var=var->next) {
- fprintf(fp,"%s", var->name);
- if (var->next) fprintf(fp,", ");
- }
- fprintf(fp," }\n");
- return 0;
-}
-
-static PyObject *
-swig_varlink_getattr(swig_varlinkobject *v, char *n) {
- swig_globalvar *var = v->vars;
- while (var) {
- if (strcmp(var->name,n) == 0) {
- return (*var->get_attr)();
- }
- var = var->next;
- }
- PyErr_SetString(PyExc_NameError,"Unknown C global variable");
- return NULL;
-}
-
-static int
-swig_varlink_setattr(swig_varlinkobject *v, char *n, PyObject *p) {
- swig_globalvar *var = v->vars;
- while (var) {
- if (strcmp(var->name,n) == 0) {
- return (*var->set_attr)(p);
- }
- var = var->next;
- }
- PyErr_SetString(PyExc_NameError,"Unknown C global variable");
- return 1;
-}
-
-statichere PyTypeObject varlinktype = {
- PyObject_HEAD_INIT(0)
- 0,
- "swigvarlink", /* Type name */
- sizeof(swig_varlinkobject), /* Basic size */
- 0, /* Itemsize */
- 0, /* Deallocator */
- (printfunc) swig_varlink_print, /* Print */
- (getattrfunc) swig_varlink_getattr, /* get attr */
- (setattrfunc) swig_varlink_setattr, /* Set attr */
- 0, /* tp_compare */
- (reprfunc) swig_varlink_repr, /* tp_repr */
- 0, /* tp_as_number */
- 0, /* tp_as_mapping*/
- 0, /* tp_hash */
-};
-
-/* Create a variable linking object for use later */
-SWIGRUNTIME(PyObject *)
-SWIG_newvarlink(void) {
- swig_varlinkobject *result = 0;
- result = PyMem_NEW(swig_varlinkobject,1);
- varlinktype.ob_type = &PyType_Type; /* Patch varlinktype into a PyType */
- result->ob_type = &varlinktype;
- result->vars = 0;
- result->ob_refcnt = 0;
- Py_XINCREF((PyObject *) result);
- return ((PyObject*) result);
-}
-
-SWIGRUNTIME(void)
-SWIG_addvarlink(PyObject *p, char *name,
- PyObject *(*get_attr)(void), int (*set_attr)(PyObject *p)) {
- swig_varlinkobject *v;
- swig_globalvar *gv;
- v= (swig_varlinkobject *) p;
- gv = (swig_globalvar *) malloc(sizeof(swig_globalvar));
- gv->name = (char *) malloc(strlen(name)+1);
- strcpy(gv->name,name);
- gv->get_attr = get_attr;
- gv->set_attr = set_attr;
- gv->next = v->vars;
- v->vars = gv;
-}
-/* Convert a pointer value */
-SWIGRUNTIME(int)
-SWIG_ConvertPtr(PyObject *obj, void **ptr, swig_type_info *ty, int flags) {
- unsigned long p;
- register int d;
- swig_type_info *tc;
- char *c;
- static PyObject *SWIG_this = 0;
- int newref = 0;
-
- if (!obj || (obj == Py_None)) {
- *ptr = 0;
- return 0;
- }
-#ifdef SWIG_COBJECT_TYPES
- if (!(PyCObject_Check(obj))) {
- if (!SWIG_this)
- SWIG_this = PyString_InternFromString("this");
- obj = PyObject_GetAttr(obj,SWIG_this);
- newref = 1;
- if (!obj) goto type_error;
- if (!PyCObject_Check(obj)) {
- Py_DECREF(obj);
- goto type_error;
- }
- }
- *ptr = PyCObject_AsVoidPtr(obj);
- c = (char *) PyCObject_GetDesc(obj);
- if (newref) Py_DECREF(obj);
- goto cobject;
-#else
- if (!(PyString_Check(obj))) {
- if (!SWIG_this)
- SWIG_this = PyString_InternFromString("this");
- obj = PyObject_GetAttr(obj,SWIG_this);
- newref = 1;
- if (!obj) goto type_error;
- if (!PyString_Check(obj)) {
- Py_DECREF(obj);
- goto type_error;
- }
- }
- c = PyString_AsString(obj);
- p = 0;
- /* Pointer values must start with leading underscore */
- if (*c != '_') {
- *ptr = (void *) 0;
- if (strcmp(c,"NULL") == 0) {
- if (newref) Py_DECREF(obj);
- return 0;
- } else {
- if (newref) Py_DECREF(obj);
- goto type_error;
- }
- }
- c++;
- /* Extract hex value from pointer */
- while ((d = *c)) {
- if ((d >= '0') && (d <= '9'))
- p = (p << 4) + (d - '0');
- else if ((d >= 'a') && (d <= 'f'))
- p = (p << 4) + (d - ('a'-10));
- else
- break;
- c++;
- }
- *ptr = (void *) p;
- if (newref) Py_DECREF(obj);
-#endif
-
-#ifdef SWIG_COBJECT_TYPES
-cobject:
-#endif
-
- if (ty) {
- tc = SWIG_TypeCheck(c,ty);
- if (!tc) goto type_error;
- *ptr = SWIG_TypeCast(tc,(void*)p);
- }
- return 0;
-
-type_error:
-
- if (flags) {
- if (ty) {
- char *temp = (char *) malloc(64+strlen(ty->name));
- sprintf(temp,"Type error. Expected %s", ty->name);
- PyErr_SetString(PyExc_TypeError, temp);
- free((char *) temp);
- } else {
- PyErr_SetString(PyExc_TypeError,"Expected a pointer");
- }
- }
- return -1;
-}
-
-/* Take a pointer and convert it to a string */
-SWIGRUNTIME(void)
-SWIG_MakePtr(char *c, void *ptr, swig_type_info *ty) {
- static char hex[17] = "0123456789abcdef";
- unsigned long p, s;
- char result[32], *r;
- r = result;
- p = (unsigned long) ptr;
- if (p > 0) {
- while (p > 0) {
- s = p & 0xf;
- *(r++) = hex[s];
- p = p >> 4;
- }
- *r = '_';
- while (r >= result)
- *(c++) = *(r--);
- strcpy (c, ty->name);
- } else {
- strcpy (c, "NULL");
- }
-}
-
-/* Create a new pointer object */
-SWIGRUNTIME(PyObject *)
-SWIG_NewPointerObj(void *ptr, swig_type_info *type) {
- char result[512];
- PyObject *robj;
- if (!ptr) {
- Py_INCREF(Py_None);
- return Py_None;
- }
-#ifdef SWIG_COBJECT_TYPES
- robj = PyCObject_FromVoidPtrAndDesc((void *) ptr, type->name, NULL);
-#else
- SWIG_MakePtr(result,ptr,type);
- robj = PyString_FromString(result);
-#endif
- return robj;
-}
-
-/* Install Constants */
-SWIGRUNTIME(void)
-SWIG_InstallConstants(PyObject *d, swig_const_info constants[]) {
- int i;
- PyObject *obj;
- for (i = 0; constants[i].type; i++) {
- switch(constants[i].type) {
- case SWIG_PY_INT:
- obj = PyInt_FromLong(constants[i].lvalue);
- break;
- case SWIG_PY_FLOAT:
- obj = PyFloat_FromDouble(constants[i].dvalue);
- break;
- case SWIG_PY_STRING:
- obj = PyString_FromString((char *) constants[i].pvalue);
- break;
- case SWIG_PY_POINTER:
- obj = SWIG_NewPointerObj(constants[i].pvalue, *(constants[i]).ptype);
- break;
- default:
- obj = 0;
- break;
- }
- if (obj) {
- PyDict_SetItemString(d,constants[i].name,obj);
- Py_DECREF(obj);
- }
- }
-}
-
-#endif
-
-#ifdef __cplusplus
-}
-#endif
-
-
-
-/* -------- TYPES TABLE (BEGIN) -------- */
-
-#define SWIGTYPE_p_ShapeFile swig_types[0]
-#define SWIGTYPE_p_SHPObject swig_types[1]
-static swig_type_info *swig_types[3];
-
-/* -------- TYPES TABLE (END) -------- */
-
-
-/*-----------------------------------------------
- @(target):= shapelibc.so
- ------------------------------------------------*/
-#define SWIG_init initshapelibc
-
-#define SWIG_name "shapelibc"
-
-
-/* import the shapelib header file. */
-#include "shapefil.h"
-#include "pyshapelib_api.h"
-
-/*
- * Rename a few shapelib functions that are effectively methods with
- * preprocessor macros so that they have the names that swig expects
- * (e.g. the destructor of SHPObject has to be called delete_SHPObject)
- */
-
-#define delete_SHPObject SHPDestroyObject
-
-/*
- * The extents() method of SHPObject.
- *
- * Return the extents as a tuple of two 4-element lists with the min.
- * and max. values of x, y, z, m.
- */
-static PyObject *
-SHPObject_extents(SHPObject *object)
-{
- return Py_BuildValue("[dddd][dddd]",
- object->dfXMin, object->dfYMin, object->dfZMin,
- object->dfMMin,
- object->dfXMax, object->dfYMax, object->dfZMax,
- object->dfMMax);
-}
-
-
-/*
- * The vertices() method of SHPObject.
- *
- * Return the x and y coords of the vertices as a list of lists of
- * tuples.
- */
-
-static PyObject* build_vertex_list(SHPObject *object, int index, int length);
-
-static PyObject*
-SHPObject_vertices(SHPObject *object)
-{
- PyObject *result = NULL;
- PyObject *part = NULL;
- int part_idx, vertex_idx;
- int length = 0;
-
-
- if (object->nParts > 0)
- {
- /* A multipart shape. Usual for SHPT_ARC and SHPT_POLYGON */
-
- result = PyList_New(object->nParts);
- if (!result)
- return NULL;
-
- for (part_idx = 0, vertex_idx = 0; part_idx < object->nParts;
- part_idx++)
- {
- if (part_idx < object->nParts - 1)
- length = (object->panPartStart[part_idx + 1]
- - object->panPartStart[part_idx]);
- else
- length = object->nVertices - object->panPartStart[part_idx];
-
- part = build_vertex_list(object, vertex_idx, length);
- if (!part)
- goto fail;
-
- if (PyList_SetItem(result, part_idx, part) < 0)
- goto fail;
-
- vertex_idx += length;
- }
- }
- else
- {
- /* only one part. usual for SHPT_POINT */
- result = build_vertex_list(object, 0, object->nVertices);
- }
-
- return result;
-
- fail:
- Py_XDECREF(part);
- Py_DECREF(result);
- return NULL;
-}
-
-
-/* Return length coordinates of the shape object, starting at vertex
- * index, as a Python list of tuples. Helper function for
- * SHPObject_vertices.
- */
-static PyObject*
-build_vertex_list(SHPObject *object, int index, int length)
-{
- int i;
- PyObject * list;
- PyObject * vertex = NULL;
-
- list = PyList_New(length);
- if (!list)
- return NULL;
-
- for (i = 0; i < length; i++, index++)
- {
- vertex = Py_BuildValue("dd", object->padfX[index],
- object->padfY[index]);
- if (!vertex)
- goto fail;
- if (PyList_SetItem(list, i, vertex) < 0)
- goto fail;
- }
-
- return list;
-
- fail:
- Py_XDECREF(vertex);
- Py_DECREF(list);
- return NULL;
-}
-
-
-
-
-
-/* The constructor of SHPObject. parts is a list of lists of tuples
- * describing the parts and their vertices just like the output of the
- * vertices() method. part_type_list is the list of part-types and may
- * be NULL. For the meaning of the part-types and their default value
- * see the Shapelib documentation.
- */
-SHPObject * new_SHPObject(int type, int id, PyObject * parts,
- PyObject * part_type_list)
-{
- /* arrays to hold the x and y coordinates of the vertices */
- double *xs = NULL, *ys = NULL;
- /* number of all vertices of all parts */
- int num_vertices;
- /* number of parts in the list parts */
- int num_parts;
- /* start index in xs and ys of the part currently worked on */
- int part_start;
- /* array of start indices in xs and ys as expected by shapelib */
- int *part_starts = NULL;
-
- /* generic counter */
- int i;
-
- /* array of part types. holds the converted content of
- * part_type_list. Stays NULL if part_type_list is NULL
- */
- int *part_types = NULL;
-
- /* temporary python objects referring to the list items being
- * worked on.
- */
- PyObject * part = NULL, *tuple = NULL;
-
- /* The result object */
- SHPObject *result;
-
- num_parts = PySequence_Length(parts);
- num_vertices = 0;
-
- /* parts and part_types have to have the same lengths */
- if (part_type_list
- && PySequence_Length(parts) != PySequence_Length(part_type_list))
- {
- PyErr_SetString(PyExc_TypeError,
- "parts and part_types have to have the same lengths");
- return NULL;
- }
-
- /* determine how many vertices there are altogether */
- for (i = 0; i < num_parts; i++)
- {
- PyObject * part = PySequence_GetItem(parts, i);
- if (!part)
- return NULL;
- num_vertices += PySequence_Length(part);
- Py_DECREF(part);
- }
-
- /* allocate the memory for the various arrays and check for memory
- errors */
- xs = malloc(num_vertices * sizeof(double));
- ys = malloc(num_vertices * sizeof(double));
- part_starts = malloc(num_parts * sizeof(int));
- if (part_type_list)
- part_types = malloc(num_parts * sizeof(int));
-
- if (!xs || !ys || !part_starts || (part_type_list && !part_types))
- {
- PyErr_NoMemory();
- goto fail;
- }
-
- /* convert the part types */
- if (part_type_list)
- {
- for (i = 0; i < num_parts; i++)
- {
- PyObject * otype = PySequence_GetItem(part_type_list, i);
- if (!otype)
- return NULL;
- part_types[i] = PyInt_AsLong(otype);
- Py_DECREF(otype);
- }
- }
-
- /* convert the list of parts */
- part_start = 0;
- for (i = 0; i < num_parts; i++)
- {
- int j, length;
-
- part = PySequence_GetItem(parts, i);
- length = PySequence_Length(part);
- part_starts[i] = part_start;
-
- for (j = 0; j < length; j++)
- {
- tuple = PySequence_GetItem(part, j);
- if (!tuple)
- goto fail;
-
- if (!PyArg_ParseTuple(tuple, "dd", xs + part_start + j,
- ys + part_start + j))
- {
- goto fail;
- }
- Py_DECREF(tuple);
- tuple = NULL;
- }
- Py_DECREF(part);
- part = NULL;
- part_start += length;
- }
-
- result = SHPCreateObject(type, id, num_parts, part_starts, part_types,
- num_vertices, xs, ys, NULL, NULL);
- free(xs);
- free(ys);
- free(part_starts);
- free(part_types);
- return result;
-
- fail:
- free(xs);
- free(ys);
- free(part_starts);
- free(part_types);
- Py_XDECREF(part);
- Py_XDECREF(tuple);
- return NULL;
-}
-
-
-static PyObject* l_output_helper(PyObject* target, PyObject* o) {
- PyObject* o2;
- if (!target) {
- target = o;
- } else if (target == Py_None) {
- Py_DECREF(Py_None);
- target = o;
- } else {
- if (!PyList_Check(target)) {
- o2 = target;
- target = PyList_New(0);
- PyList_Append(target, o2);
- Py_XDECREF(o2);
- }
- PyList_Append(target,o);
- Py_XDECREF(o);
- }
- return target;
-}
-
-static PyObject* t_output_helper(PyObject* target, PyObject* o) {
- PyObject* o2;
- PyObject* o3;
-
- if (!target) {
- target = o;
- } else if (target == Py_None) {
- Py_DECREF(Py_None);
- target = o;
- } else {
- if (!PyTuple_Check(target)) {
- o2 = target;
- target = PyTuple_New(1);
- PyTuple_SetItem(target, 0, o2);
- }
- o3 = PyTuple_New(1);
- PyTuple_SetItem(o3, 0, o);
-
- o2 = target;
- target = PySequence_Concat(o2, o3);
- Py_DECREF(o2);
- Py_DECREF(o3);
- }
- return target;
-}
-
-#define SWIG_MemoryError 1
-#define SWIG_IOError 2
-#define SWIG_RuntimeError 3
-#define SWIG_IndexError 4
-#define SWIG_TypeError 5
-#define SWIG_DivisionByZero 6
-#define SWIG_OverflowError 7
-#define SWIG_SyntaxError 8
-#define SWIG_ValueError 9
-#define SWIG_SystemError 10
-#define SWIG_UnknownError 99
-
-static void _SWIG_exception(int code, char *msg) {
- switch(code) {
- case SWIG_MemoryError:
- PyErr_SetString(PyExc_MemoryError,msg);
- break;
- case SWIG_IOError:
- PyErr_SetString(PyExc_IOError,msg);
- break;
- case SWIG_RuntimeError:
- PyErr_SetString(PyExc_RuntimeError,msg);
- break;
- case SWIG_IndexError:
- PyErr_SetString(PyExc_IndexError,msg);
- break;
- case SWIG_TypeError:
- PyErr_SetString(PyExc_TypeError,msg);
- break;
- case SWIG_DivisionByZero:
- PyErr_SetString(PyExc_ZeroDivisionError,msg);
- break;
- case SWIG_OverflowError:
- PyErr_SetString(PyExc_OverflowError,msg);
- break;
- case SWIG_SyntaxError:
- PyErr_SetString(PyExc_SyntaxError,msg);
- break;
- case SWIG_ValueError:
- PyErr_SetString(PyExc_ValueError,msg);
- break;
- case SWIG_SystemError:
- PyErr_SetString(PyExc_SystemError,msg);
- break;
- default:
- PyErr_SetString(PyExc_RuntimeError,msg);
- break;
- }
-}
-
-#define SWIG_exception(a,b) { _SWIG_exception(a,b); return NULL; }
-
- typedef struct {
- SHPHandle handle;
- } ShapeFile;
-
-#define NOCHECK_delete_ShapeFile
-#define NOCHECK_ShapeFile_close
-
- ShapeFile * open_ShapeFile(const char *filename, const char * mode) {
- ShapeFile * self = malloc(sizeof(ShapeFile));
- if (self)
- self->handle = SHPOpen(filename, mode);
- return self;
- }
-
- ShapeFile * create_ShapeFile(const char *filename, int type) {
- ShapeFile * self = malloc(sizeof(ShapeFile));
- if (self)
- self->handle = SHPCreate(filename, type);
- return self;
- }
-
- static PyShapeLibAPI the_api = {
- SHPReadObject,
- SHPDestroyObject,
- SHPCreateTree,
- SHPDestroyTree,
- SHPTreeFindLikelyShapes
- };
-
- PyObject * c_api() {
- return PyCObject_FromVoidPtr(&the_api, NULL);
- }
-#ifdef __cplusplus
-extern "C" {
-#endif
-static PyObject *_wrap_open(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- char *arg0 ;
- char *arg1 = "rb" ;
- ShapeFile *result ;
-
- if(!PyArg_ParseTuple(args,"s|s:open",&arg0,&arg1)) return NULL;
- {
- result = (ShapeFile *)open_ShapeFile((char const *)arg0,(char const *)arg1);
- ;
- if (!result)
- {
- SWIG_exception(SWIG_MemoryError, "no memory");
- }
- else if (!result->handle)
- {
- SWIG_exception(SWIG_IOError, "open_ShapeFile failed");
- }
- }resultobj = SWIG_NewPointerObj((void *) result, SWIGTYPE_p_ShapeFile);
- return resultobj;
-}
-
-
-static PyObject *_wrap_create(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- char *arg0 ;
- int arg1 ;
- ShapeFile *result ;
-
- if(!PyArg_ParseTuple(args,"si:create",&arg0,&arg1)) return NULL;
- {
- result = (ShapeFile *)create_ShapeFile((char const *)arg0,arg1);
- ;
- if (!result)
- {
- SWIG_exception(SWIG_MemoryError, "no memory");
- }
- else if (!result->handle)
- {
- SWIG_exception(SWIG_IOError, "create_ShapeFile failed");
- }
- }resultobj = SWIG_NewPointerObj((void *) result, SWIGTYPE_p_ShapeFile);
- return resultobj;
-}
-
-
-static PyObject *_wrap_c_api(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- PyObject *result ;
-
- if(!PyArg_ParseTuple(args,":c_api")) return NULL;
- result = (PyObject *)c_api();
- {
- resultobj = result;
- }
- return resultobj;
-}
-
-
-static PyObject *_wrap_type_name(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- int arg0 ;
- char *result ;
-
- if(!PyArg_ParseTuple(args,"i:type_name",&arg0)) return NULL;
- result = (char *)SHPTypeName(arg0);
- resultobj = PyString_FromString(result);
- return resultobj;
-}
-
-
-static PyObject *_wrap_part_type_name(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- int arg0 ;
- char *result ;
-
- if(!PyArg_ParseTuple(args,"i:part_type_name",&arg0)) return NULL;
- result = (char *)SHPPartTypeName(arg0);
- resultobj = PyString_FromString(result);
- return resultobj;
-}
-
-
-static PyObject *_wrap_SHPObject_type_get(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- SHPObject *arg0 ;
- PyObject * argo0 =0 ;
- int result ;
-
- if(!PyArg_ParseTuple(args,"O:SHPObject_type_get",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_SHPObject,1)) == -1) return NULL;
- result = (int ) (arg0->nSHPType);
- resultobj = PyInt_FromLong((long)result);
- return resultobj;
-}
-
-
-static PyObject *_wrap_SHPObject_id_get(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- SHPObject *arg0 ;
- PyObject * argo0 =0 ;
- int result ;
-
- if(!PyArg_ParseTuple(args,"O:SHPObject_id_get",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_SHPObject,1)) == -1) return NULL;
- result = (int ) (arg0->nShapeId);
- resultobj = PyInt_FromLong((long)result);
- return resultobj;
-}
-
-
-static PyObject *_wrap_new_SHPObject(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- int arg0 ;
- int arg1 ;
- PyObject *arg2 ;
- PyObject *arg3 = NULL ;
- PyObject * obj2 = 0 ;
- PyObject * obj3 = 0 ;
- SHPObject *result ;
-
- if(!PyArg_ParseTuple(args,"iiO|O:new_SHPObject",&arg0,&arg1,&obj2,&obj3)) return NULL;
- {
- arg2 = obj2;
- }
- if (obj3)
- {
- arg3 = obj3;
- }
- {
- result = (SHPObject *)new_SHPObject(arg0,arg1,arg2,arg3);
- ;
- if (PyErr_Occurred())
- return NULL;
- }resultobj = SWIG_NewPointerObj((void *) result, SWIGTYPE_p_SHPObject);
- return resultobj;
-}
-
-
-static PyObject *_wrap_delete_SHPObject(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- SHPObject *arg0 ;
- PyObject * argo0 =0 ;
-
- if(!PyArg_ParseTuple(args,"O:delete_SHPObject",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_SHPObject,1)) == -1) return NULL;
- delete_SHPObject(arg0);
- Py_INCREF(Py_None);
- resultobj = Py_None;
- return resultobj;
-}
-
-
-static PyObject *_wrap_SHPObject_extents(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- SHPObject *arg0 ;
- PyObject * argo0 =0 ;
- PyObject *result ;
-
- if(!PyArg_ParseTuple(args,"O:SHPObject_extents",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_SHPObject,1)) == -1) return NULL;
- result = (PyObject *)SHPObject_extents(arg0);
- {
- resultobj = result;
- }
- return resultobj;
-}
-
-
-static PyObject *_wrap_SHPObject_vertices(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- SHPObject *arg0 ;
- PyObject * argo0 =0 ;
- PyObject *result ;
-
- if(!PyArg_ParseTuple(args,"O:SHPObject_vertices",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_SHPObject,1)) == -1) return NULL;
- result = (PyObject *)SHPObject_vertices(arg0);
- {
- resultobj = result;
- }
- return resultobj;
-}
-
-
-ShapeFile * new_ShapeFile(char *file,char *mode) {
- {
- ShapeFile * self = malloc(sizeof(ShapeFile));
- if (self)
- self->handle = SHPOpen(file, mode);
- return self;
- }
-}
-
-
-static PyObject *_wrap_new_ShapeFile(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- char *arg0 ;
- char *arg1 = "rb" ;
- ShapeFile *result ;
-
- if(!PyArg_ParseTuple(args,"s|s:new_ShapeFile",&arg0,&arg1)) return NULL;
- {
- result = (ShapeFile *)new_ShapeFile(arg0,arg1);
- ;
- if (!result)
- {
- SWIG_exception(SWIG_MemoryError, "no memory");
- }
- else if (!result->handle)
- {
- SWIG_exception(SWIG_IOError, "new_ShapeFile failed");
- }
- }resultobj = SWIG_NewPointerObj((void *) result, SWIGTYPE_p_ShapeFile);
- return resultobj;
-}
-
-
-void delete_ShapeFile(ShapeFile *self) {
- {
- if (self->handle)
- SHPClose(self->handle);
- free(self);
- }
-}
-
-
-static PyObject *_wrap_delete_ShapeFile(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- ShapeFile *arg0 ;
- PyObject * argo0 =0 ;
-
- if(!PyArg_ParseTuple(args,"O:delete_ShapeFile",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_ShapeFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_delete_ShapeFile
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "shapefile already closed");
- #endif
- }
- delete_ShapeFile(arg0);
- Py_INCREF(Py_None);
- resultobj = Py_None;
- return resultobj;
-}
-
-
-void ShapeFile_close(ShapeFile *self) {
- {
- if (self->handle)
- {
- SHPClose(self->handle);
- self->handle = NULL;
- }
- }
-}
-
-
-static PyObject *_wrap_ShapeFile_close(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- ShapeFile *arg0 ;
- PyObject * argo0 =0 ;
-
- if(!PyArg_ParseTuple(args,"O:ShapeFile_close",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_ShapeFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_ShapeFile_close
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "shapefile already closed");
- #endif
- }
- ShapeFile_close(arg0);
- Py_INCREF(Py_None);
- resultobj = Py_None;
- return resultobj;
-}
-
-
-void ShapeFile_info(ShapeFile *self,int *output_entities,int *output_type,double *output_min_bounds,double *output_max_bounds) {
- {
- SHPGetInfo(self->handle, output_entities, output_type,
- output_min_bounds, output_max_bounds);
- }
-}
-
-
-static PyObject *_wrap_ShapeFile_info(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- ShapeFile *arg0 ;
- int *arg1 ;
- int *arg2 ;
- double *arg3 ;
- double *arg4 ;
- int temp ;
- int temp0 ;
- double temp1[4] ;
- double temp2[4] ;
- PyObject * argo0 =0 ;
-
- {
- arg1 = &temp;
- }
- {
- arg2 = &temp0;
- }
- {
- arg3 = temp1;
- }
- {
- arg4 = temp2;
- }
- if(!PyArg_ParseTuple(args,"O:ShapeFile_info",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_ShapeFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_ShapeFile_info
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "shapefile already closed");
- #endif
- }
- ShapeFile_info(arg0,arg1,arg2,arg3,arg4);
- Py_INCREF(Py_None);
- resultobj = Py_None;
- {
- PyObject *o;
- o = PyInt_FromLong((long) (*arg1));
- resultobj = t_output_helper(resultobj, o);
- }
- {
- PyObject *o;
- o = PyInt_FromLong((long) (*arg2));
- resultobj = t_output_helper(resultobj, o);
- }
- {
- PyObject * list = Py_BuildValue("[dddd]",
- arg3[0], arg3[1],
- arg3[2], arg3[3]);
- resultobj = t_output_helper(resultobj,list);
- }
- {
- PyObject * list = Py_BuildValue("[dddd]",
- arg4[0], arg4[1],
- arg4[2], arg4[3]);
- resultobj = t_output_helper(resultobj,list);
- }
- return resultobj;
-}
-
-
-SHPObject * ShapeFile_read_object(ShapeFile *self,int i) {
- {
- return SHPReadObject(self->handle, i);
- }
-}
-
-
-static PyObject *_wrap_ShapeFile_read_object(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- ShapeFile *arg0 ;
- int arg1 ;
- PyObject * argo0 =0 ;
- SHPObject *result ;
-
- if(!PyArg_ParseTuple(args,"Oi:ShapeFile_read_object",&argo0,&arg1)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_ShapeFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_ShapeFile_read_object
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "shapefile already closed");
- #endif
- }
- result = (SHPObject *)ShapeFile_read_object(arg0,arg1);
- resultobj = SWIG_NewPointerObj((void *) result, SWIGTYPE_p_SHPObject);
- return resultobj;
-}
-
-
-int ShapeFile_write_object(ShapeFile *self,int iShape,SHPObject *psObject) {
- {
- return SHPWriteObject(self->handle, iShape, psObject);
- }
-}
-
-
-static PyObject *_wrap_ShapeFile_write_object(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- ShapeFile *arg0 ;
- int arg1 ;
- SHPObject *arg2 ;
- PyObject * argo0 =0 ;
- PyObject * argo2 =0 ;
- int result ;
-
- if(!PyArg_ParseTuple(args,"OiO:ShapeFile_write_object",&argo0,&arg1,&argo2)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_ShapeFile,1)) == -1) return NULL;
- if ((SWIG_ConvertPtr(argo2,(void **) &arg2,SWIGTYPE_p_SHPObject,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_ShapeFile_write_object
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "shapefile already closed");
- #endif
- }
- result = (int )ShapeFile_write_object(arg0,arg1,arg2);
- resultobj = PyInt_FromLong((long)result);
- return resultobj;
-}
-
-
-PyObject * ShapeFile_cobject(ShapeFile *self) {
- {
- return PyCObject_FromVoidPtr(self->handle, NULL);
- }
-}
-
-
-static PyObject *_wrap_ShapeFile_cobject(PyObject *self, PyObject *args) {
- PyObject *resultobj;
- ShapeFile *arg0 ;
- PyObject * argo0 =0 ;
- PyObject *result ;
-
- if(!PyArg_ParseTuple(args,"O:ShapeFile_cobject",&argo0)) return NULL;
- if ((SWIG_ConvertPtr(argo0,(void **) &arg0,SWIGTYPE_p_ShapeFile,1)) == -1) return NULL;
- {
- #ifndef NOCHECK_ShapeFile_cobject
- if (!arg0 || !arg0->handle)
- SWIG_exception(SWIG_TypeError, "shapefile already closed");
- #endif
- }
- result = (PyObject *)ShapeFile_cobject(arg0);
- {
- resultobj = result;
- }
- return resultobj;
-}
-
-
-static PyMethodDef shapelibcMethods[] = {
- { "open", _wrap_open, METH_VARARGS },
- { "create", _wrap_create, METH_VARARGS },
- { "c_api", _wrap_c_api, METH_VARARGS },
- { "type_name", _wrap_type_name, METH_VARARGS },
- { "part_type_name", _wrap_part_type_name, METH_VARARGS },
- { "SHPObject_type_get", _wrap_SHPObject_type_get, METH_VARARGS },
- { "SHPObject_id_get", _wrap_SHPObject_id_get, METH_VARARGS },
- { "new_SHPObject", _wrap_new_SHPObject, METH_VARARGS },
- { "delete_SHPObject", _wrap_delete_SHPObject, METH_VARARGS },
- { "SHPObject_extents", _wrap_SHPObject_extents, METH_VARARGS },
- { "SHPObject_vertices", _wrap_SHPObject_vertices, METH_VARARGS },
- { "new_ShapeFile", _wrap_new_ShapeFile, METH_VARARGS },
- { "delete_ShapeFile", _wrap_delete_ShapeFile, METH_VARARGS },
- { "ShapeFile_close", _wrap_ShapeFile_close, METH_VARARGS },
- { "ShapeFile_info", _wrap_ShapeFile_info, METH_VARARGS },
- { "ShapeFile_read_object", _wrap_ShapeFile_read_object, METH_VARARGS },
- { "ShapeFile_write_object", _wrap_ShapeFile_write_object, METH_VARARGS },
- { "ShapeFile_cobject", _wrap_ShapeFile_cobject, METH_VARARGS },
- { NULL, NULL }
-};
-
-#ifdef __cplusplus
-}
-#endif
-
-/* -------- TYPE CONVERSION AND EQUIVALENCE RULES (BEGIN) -------- */
-
-static swig_type_info _swigt__p_ShapeFile[] = {{"_p_ShapeFile", 0, "ShapeFile *"},{"_p_ShapeFile"},{0}};
-static swig_type_info _swigt__p_SHPObject[] = {{"_p_SHPObject", 0, "SHPObject *"},{"_p_SHPObject"},{0}};
-
-static swig_type_info *swig_types_initial[] = {
-_swigt__p_ShapeFile,
-_swigt__p_SHPObject,
-0
-};
-
-
-/* -------- TYPE CONVERSION AND EQUIVALENCE RULES (END) -------- */
-
-static swig_const_info swig_const_table[] = {
- { SWIG_PY_INT, "SHPT_NULL", (long) 0, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_POINT", (long) 1, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_ARC", (long) 3, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_POLYGON", (long) 5, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_MULTIPOINT", (long) 8, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_POINTZ", (long) 11, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_ARCZ", (long) 13, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_POLYGONZ", (long) 15, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_MULTIPOINTZ", (long) 18, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_POINTM", (long) 21, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_ARCM", (long) 23, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_POLYGONM", (long) 25, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_MULTIPOINTM", (long) 28, 0, 0, 0},
- { SWIG_PY_INT, "SHPT_MULTIPATCH", (long) 31, 0, 0, 0},
- { SWIG_PY_INT, "SHPP_TRISTRIP", (long) 0, 0, 0, 0},
- { SWIG_PY_INT, "SHPP_TRIFAN", (long) 1, 0, 0, 0},
- { SWIG_PY_INT, "SHPP_OUTERRING", (long) 2, 0, 0, 0},
- { SWIG_PY_INT, "SHPP_INNERRING", (long) 3, 0, 0, 0},
- { SWIG_PY_INT, "SHPP_FIRSTRING", (long) 4, 0, 0, 0},
- { SWIG_PY_INT, "SHPP_RING", (long) 5, 0, 0, 0},
-{0}};
-
-static PyObject *SWIG_globals;
-#ifdef __cplusplus
-extern "C"
-#endif
-SWIGEXPORT(void) initshapelibc(void) {
- PyObject *m, *d;
- int i;
- SWIG_globals = SWIG_newvarlink();
- m = Py_InitModule("shapelibc", shapelibcMethods);
- d = PyModule_GetDict(m);
- for (i = 0; swig_types_initial[i]; i++) {
- swig_types[i] = SWIG_TypeRegister(swig_types_initial[i]);
- }
- SWIG_InstallConstants(d,swig_const_table);
-}
-
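
Two behaviours of the deleted wrapper are worth keeping in mind when reviewing its hand-written replacement: open() and create() raise IOError when shapelib returns a NULL handle, and every ShapeFile method except close() and the destructor raises TypeError ("shapefile already closed") once the handle is gone, instead of handing shapelib a NULL pointer. A sketch of that contract:

    # Use-after-close is reported as a Python exception, not a crash.
    import shapelib

    shp = shapelib.open("roads")      # placeholder name; IOError if missing
    shp.close()
    try:
        shp.read_object(0)
    except TypeError, err:
        print "caught:", err          # "shapefile already closed"
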
Copied: trunk/thuban/libraries/pyshapelib/shapelibmodule.c (from rev 2888, branches/WIP-pyshapelib-Unicode/thuban/libraries/pyshapelib/shapelibmodule.c)
Modified: trunk/thuban/libraries/pyshapelib/shptreemodule.c
===================================================================
--- trunk/thuban/libraries/pyshapelib/shptreemodule.c 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/shptreemodule.c 2009-09-27 20:36:21 UTC (rev 2889)
@@ -50,7 +50,7 @@
shptree_repr(SHPTreeObject * self)
{
char buf[1000];
- sprintf(buf, "<SHPTree at %xul>", (unsigned long)self);
+ sprintf(buf, "<SHPTree at %p>", self);
return PyString_FromString(buf);
}
@@ -166,8 +166,7 @@
};
-void
-initshptree()
+PyMODINIT_FUNC initshptree(void)
{
SHPTreeType.ob_type = &PyType_Type;
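
The shptree changes above are small (a portable %p in the repr and a proper PyMODINIT_FUNC init signature), but for context this is how the module is driven from Python in Thuban. The constructor and query signatures below are assumptions taken from Thuban's own usage, not from this diff:

    # Build a quadtree over an open shapefile and query a bounding box.
    # SHPTree(cobject, dimension, max_depth) and find_shapes(min, max)
    # are assumed signatures; treat them as illustrative only.
    import shapelib
    import shptree

    shp = shapelib.open("roads")                 # placeholder filename
    num_shapes, shp_type, mins, maxs = shp.info()

    tree = shptree.SHPTree(shp.cobject(), 2, 0)  # depth 0: let shapelib pick
    print tree                                   # repr now uses %p: <SHPTree at 0x...>

    hits = tree.find_shapes((mins[0], mins[1]), (maxs[0], maxs[1]))
    print "shapes in full extent:", len(hits)
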
Modified: trunk/thuban/libraries/pyshapelib/testdbf.py
===================================================================
--- trunk/thuban/libraries/pyshapelib/testdbf.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/pyshapelib/testdbf.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -12,17 +12,99 @@
# $Id$
import unittest
-import dbflib
+import os
+import shutil
class TestDBF(unittest.TestCase):
- def test_add_field(self):
- """Test whethe add_field reports exceptions"""
- dbf = dbflib.create("test.dbf")
- # For strings the precision parameter must be 0
- self.assertRaises(RuntimeError,
- dbf.add_field, "str", dbflib.FTString, 10, 5)
+ def setUp(self):
+ self.testdir = "testdbf-dir"
+ if os.path.exists(self.testdir):
+ shutil.rmtree(self.testdir)
+ os.mkdir(self.testdir)
+ self.testpath = os.path.join(self.testdir, "test")
+
+ import dbflib
+ self.fields = [
+ ("NAME", dbflib.FTString, 20, 0),
+ ("INT", dbflib.FTInteger, 10, 0),
+ ("FLOAT", dbflib.FTDouble, 10, 4),
+ ("BOOL", dbflib.FTLogical, 1, 0)
+ ]
+ self.records = [
+ ('Weatherwax', 1, 3.1415926535, True),
+ ('Ogg', 2, -1000.1234, False),
+ (u'x\u03C0\u03C1\u03C2', 10, 0, 1),
+ ]
+ def test_add_field(self):
+ '''Test whether add_field reports exceptions'''
+ import dbflib
+ dbf = dbflib.create(self.testpath)
+ # For strings the precision parameter must be 0
+ #self.assertRaises(RuntimeError, dbf.add_field, "str", dbflib.FTString, 10, 5)
+ def test_dbflib_flow(self):
+ self.__make_dbf()
+ self.__read_dbf()
+
+ def __make_dbf(self):
+ import dbflib
+ # create a new dbf file and add three fields.
+ dbf = dbflib.create(self.testpath, code_page=dbflib.CPG_UTF_8, return_unicode=True)
+ for name, type, width, decimals in self.fields:
+ dbf.add_field(name, type, width, decimals)
+
+ # Records can be added as a dictionary...
+ keys = [field[0] for field in self.fields]
+ dbf.write_record(0, dict(zip(keys, self.records[0])))
+
+ # ... or as a sequence
+ dbf.write_record(1, self.records[1])
+
+ # ... or as individual attributes
+ for i, value in enumerate(self.records[2]):
+ dbf.write_attribute(2, i, value)
+
+ dbf.close()
+
+ def __read_dbf(self):
+ import dbflib
+ dbf = dbflib.DBFFile(self.testpath, return_unicode=True)
+ # test the fields
+ self.assertEqual(dbf.field_count(), len(self.fields))
+ for i in range(dbf.field_count()):
+ type, name, width, decimals = dbf.field_info(i)
+ self.assertEqual((name, type, width, decimals), self.fields[i])
+
+ # try to read individual attributes (one field within a record)
+ self.assertEqual(dbf.record_count(), len(self.records))
+ for i in range(dbf.record_count()):
+ for k in range(dbf.field_count()):
+ self.__assertEqual(dbf.read_attribute(i, k), self.records[i][k])
+
+ # try to read complete records (they are returned as dictionaries)
+ keys = zip(*self.fields)[0]
+ for i in range(dbf.record_count()):
+ rec = dbf.read_record(i)
+ self.assert_(isinstance(rec, dict))
+ for k, key in enumerate(keys):
+ self.__assertEqual(rec[key], self.records[i][k])
+
+ # try to read past bounds
+ self.assertRaises(IndexError, dbf.read_record, -1)
+ self.assertRaises(IndexError, dbf.read_record, dbf.record_count())
+ self.assertRaises(IndexError, dbf.read_attribute, 0, -1)
+ self.assertRaises(IndexError, dbf.read_attribute, -1, 0)
+ self.assertRaises(IndexError, dbf.read_attribute, dbf.record_count(), 0)
+ self.assertRaises(IndexError, dbf.read_attribute, 0, dbf.field_count())
+
+ def __assertEqual(self, a, b, msg=None):
+ if isinstance(a, float):
+ self.assertAlmostEqual(a, b, 4, msg)
+ else:
+ self.assertEqual(a, b, msg)
+
+
if __name__ == "__main__":
unittest.main()
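
The rewritten test above doubles as usage documentation for the new dbflib from the WIP-pyshapelib-Unicode branch: create() accepts code_page and return_unicode, records can be written as dicts, sequences or single attributes, and out-of-range reads raise IndexError. Condensed into a standalone round-trip (the file name is a placeholder; all argument names are taken from the test):

    # Round-trip a unicode value through a UTF-8 encoded DBF file.
    import dbflib

    dbf = dbflib.create("unicode-test",
                        code_page=dbflib.CPG_UTF_8,
                        return_unicode=True)
    dbf.add_field("NAME", dbflib.FTString, 20, 0)
    dbf.write_record(0, {"NAME": u"x\u03C0\u03C1\u03C2"})
    dbf.close()

    dbf = dbflib.DBFFile("unicode-test", return_unicode=True)
    assert dbf.read_record(0)["NAME"] == u"x\u03C0\u03C1\u03C2"
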
Modified: trunk/thuban/libraries/shapelib/dbfopen.c
===================================================================
--- trunk/thuban/libraries/shapelib/dbfopen.c 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/shapelib/dbfopen.c 2009-09-27 20:36:21 UTC (rev 2889)
@@ -33,212 +33,111 @@
* DEALINGS IN THE SOFTWARE.
******************************************************************************
*
- * $Log$
- * Revision 1.3 2004/05/17 15:47:57 bh
- * Update to newest shapelib and get rid of Thuban specific extensions,
- * i.e. use the new DBFUpdateHeader instead of our DBFCommit kludge
+ * $Log: dbfopen.c,v $
+ * Revision 1.83 2008-11-12 14:28:15 fwarmerdam
+ * DBFCreateField() now works on files with records
*
- * * libraries/shapelib/shpopen.c: Update to version from current
- * shapelib CVS.
+ * Revision 1.82 2008/11/11 17:47:09 fwarmerdam
+ * added DBFDeleteField() function
*
- * * libraries/shapelib/shapefil.h: Update to version from current
- * shapelib CVS.
+ * Revision 1.81 2008/01/03 17:48:13 bram
+ * in DBFCreate, use default code page LDID/87 (= 0x57, ANSI)
+ * instead of LDID/3. This seems to be the same as what ESRI
+ * would be doing by default.
*
- * * libraries/shapelib/dbfopen.c: Update to version from current
- * shapelib CVS.
- * (DBFCommit): Effectively removed since shapelib itself has
- * DBFUpdateHeader now which is better for what DBFCommit wanted to
- * achieve.
- * We're now using an unmodified version of dbfopen.
+ * Revision 1.80 2007/12/30 14:36:39 fwarmerdam
+ * avoid syntax issue with last comment.
*
- * * libraries/pyshapelib/dbflib_wrap.c, libraries/pyshapelib/dbflib.py:
- * Update from dbflib.i
+ * Revision 1.79 2007/12/30 14:35:48 fwarmerdam
+ * Avoid char* / unsigned char* warnings.
*
- * * libraries/pyshapelib/dbflib.i (DBFInfo_commit): New. Implementation of
- * the commit method. This new indirection is necessary because we use the
- * DBFUpdateHeader function now which is not available in shapelib <=
- * 1.2.10
- * (DBFFile::commit): Use DBFInfo_commit as implementation
- * (pragma __class__): New. Kludge to remove the commit method when
- * the DBFUpdateHeader function isn't available
- * (_have_commit): New. Helper for the pragma kludge.
+ * Revision 1.78 2007/12/18 18:28:07 bram
+ * - create hook for client specific atof (bugzilla ticket 1615)
+ * - check for NULL handle before closing cpCPG file, and close after reading.
*
- * * libraries/pyshapelib/setup.py (dbf_macros): New. Return the
- * preprocessor macros needed to compile the dbflib wrapper. Determine
- * whether DBFUpdateHeader is available and define the right value of
- * HAVE_UPDATE_HEADER
- * (extensions): Use dbf_macros for the dbflibc extension
+ * Revision 1.77 2007/12/15 20:25:21 bram
+ * dbfopen.c now reads the Code Page information from the DBF file, and exports
+ * this information as a string through the DBFGetCodePage function. This is
+ * either the number from the LDID header field ("LDID/<number>") or as the
+ * content of an accompanying .CPG file. When creating a DBF file, the code can
+ * be set using DBFCreateEx.
*
- * * setup.py (extensions): Add the HAVE_UPDATE_HEADER macro with
- * value '1' to the Lib.dbflibc extension. This simply reflects the
- * shapelib and pyshapelib updates
+ * Revision 1.76 2007/12/12 22:21:32 bram
+ * DBFClose: check for NULL psDBF handle before trying to close it.
*
- * Revision 1.53 2003/12/29 00:00:30 fwarmerdam
- * mark DBFWriteAttributeDirectly as SHPAPI_CALL
+ * Revision 1.75 2007/12/06 13:58:19 fwarmerdam
+ * make sure file offset calculations are done in as SAOffset
*
- * Revision 1.52 2003/07/08 15:20:03 warmerda
- * avoid warnings about downcasting to unsigned char
+ * Revision 1.74 2007/12/06 07:00:25 fwarmerdam
+ * dbfopen now using SAHooks for fileio
*
- * Revision 1.51 2003/07/08 13:50:15 warmerda
- * DBFIsAttributeNULL check for pszValue==NULL - bug 360
+ * Revision 1.73 2007/09/03 19:48:11 fwarmerdam
+ * move DBFReadAttribute() static dDoubleField into dbfinfo
*
- * Revision 1.50 2003/04/21 18:58:25 warmerda
- * ensure current record is flushed at same time as header is updated
+ * Revision 1.72 2007/09/03 19:34:06 fwarmerdam
+ * Avoid use of static tuple buffer in DBFReadTuple()
*
- * Revision 1.49 2003/04/21 18:30:37 warmerda
- * added header write/update public methods
+ * Revision 1.71 2006/06/22 14:37:18 fwarmerdam
+ * avoid memory leak if dbfopen fread fails
*
- * Revision 1.48 2003/03/10 14:51:27 warmerda
- * DBFWrite* calls now return FALSE if they have to truncate
+ * Revision 1.70 2006/06/17 17:47:05 fwarmerdam
+ * use calloc() for dbfinfo in DBFCreate
*
- * Revision 1.47 2002/11/20 03:32:22 warmerda
- * Ensure field name in DBFGetFieldIndex() is properly terminated.
+ * Revision 1.69 2006/06/17 15:34:32 fwarmerdam
+ * disallow creating fields wider than 255
*
- * Revision 1.46 2002/10/09 13:10:21 warmerda
- * Added check that width is positive.
+ * Revision 1.68 2006/06/17 15:12:40 fwarmerdam
+ * Fixed C++ style comments.
*
- * Revision 1.45 2002/09/29 00:00:08 warmerda
- * added FTLogical and logical attribute read/write calls
+ * Revision 1.67 2006/06/17 00:24:53 fwarmerdam
+ * Don't treat non-zero decimals values as high order byte for length
+ * for strings. It causes serious corruption for some files.
+ * http://bugzilla.remotesensing.org/show_bug.cgi?id=1202
*
- * Revision 1.44 2002/05/07 13:46:11 warmerda
- * Added DBFWriteAttributeDirectly().
+ * Revision 1.66 2006/03/29 18:26:20 fwarmerdam
+ * fixed bug with size of pachfieldtype in dbfcloneempty
*
- * Revision 1.43 2002/02/13 19:39:21 warmerda
- * Fix casting issues in DBFCloneEmpty().
+ * Revision 1.65 2006/02/15 01:14:30 fwarmerdam
+ * added DBFAddNativeFieldType
*
- * Revision 1.42 2002/01/15 14:36:07 warmerda
- * updated email address
+ * Revision 1.64 2006/02/09 00:29:04 fwarmerdam
+ * Changed to put spaces into string fields that are NULL as
+ * per http://bugzilla.maptools.org/show_bug.cgi?id=316.
*
- * Revision 1.41 2002/01/15 14:31:49 warmerda
- * compute rather than copying nHeaderLength in DBFCloneEmpty()
+ * Revision 1.63 2006/01/25 15:35:43 fwarmerdam
+ * check success on DBFFlushRecord
*
- * Revision 1.40 2002/01/09 04:32:35 warmerda
- * fixed to read correct amount of header
+ * Revision 1.62 2006/01/10 16:28:03 fwarmerdam
+ * Fixed typo in CPLError.
*
- * Revision 1.39 2001/12/11 22:41:03 warmerda
- * improve io related error checking when reading header
+ * Revision 1.61 2006/01/10 16:26:29 fwarmerdam
+ * Push loading record buffer into DBFLoadRecord.
+ * Implement CPL error reporting if USE_CPL defined.
*
- * Revision 1.38 2001/11/28 16:07:31 warmerda
- * Cleanup to avoid compiler warnings as suggested by Richard Hash.
+ * Revision 1.60 2006/01/05 01:27:27 fwarmerdam
+ * added dbf deletion mark/fetch
*
- * Revision 1.37 2001/07/04 05:18:09 warmerda
- * do last fix properly
+ * Revision 1.59 2005/03/14 15:20:28 fwarmerdam
+ * Fixed last change.
*
- * Revision 1.36 2001/07/04 05:16:09 warmerda
- * fixed fieldname comparison in DBFGetFieldIndex
+ * Revision 1.58 2005/03/14 15:18:54 fwarmerdam
+ * Treat very wide fields with no decimals as double. This is
+ * more than 32bit integer fields.
*
- * Revision 1.35 2001/06/22 02:10:06 warmerda
- * fixed NULL shape support with help from Jim Matthews
+ * Revision 1.57 2005/02/10 20:16:54 fwarmerdam
+ * Make the pszStringField buffer for DBFReadAttribute() static char [256]
+ * as per bug 306.
*
- * Revision 1.33 2001/05/31 19:20:13 warmerda
- * added DBFGetFieldIndex()
+ * Revision 1.56 2005/02/10 20:07:56 fwarmerdam
+ * Fixed bug 305 in DBFCloneEmpty() - header length problem.
*
- * Revision 1.32 2001/05/31 18:15:40 warmerda
- * Added support for NULL fields in DBF files
+ * Revision 1.55 2004/09/26 20:23:46 fwarmerdam
+ * avoid warnings with rcsid and signed/unsigned stuff
*
- * Revision 1.31 2001/05/23 13:36:52 warmerda
- * added use of SHPAPI_CALL
- *
- * Revision 1.30 2000/12/05 14:43:38 warmerda
- * DBReadAttribute() white space trimming bug fix
- *
- * Revision 1.29 2000/10/05 14:36:44 warmerda
- * fix bug with writing very wide numeric fields
- *
- * Revision 1.28 2000/09/25 14:18:07 warmerda
- * Added some casts of strlen() return result to fix warnings on some
- * systems, as submitted by Daniel.
- *
- * Revision 1.27 2000/09/25 14:15:51 warmerda
- * added DBFGetNativeFieldType()
- *
- * Revision 1.26 2000/07/07 13:39:45 warmerda
- * removed unused variables, and added system include files
- *
- * Revision 1.25 2000/05/29 18:19:13 warmerda
- * avoid use of uchar, and adding casting fix
- *
- * Revision 1.24 2000/05/23 13:38:27 warmerda
- * Added error checks on return results of fread() and fseek().
- *
- * Revision 1.23 2000/05/23 13:25:49 warmerda
- * Avoid crashing if field or record are out of range in dbfread*attribute().
- *
- * Revision 1.22 1999/12/15 13:47:24 warmerda
- * Added stdlib.h to ensure that atof() is prototyped.
- *
- * Revision 1.21 1999/12/13 17:25:46 warmerda
- * Added support for upper case .DBF extention.
- *
- * Revision 1.20 1999/11/30 16:32:11 warmerda
- * Use atof() instead of sscanf().
- *
- * Revision 1.19 1999/11/05 14:12:04 warmerda
- * updated license terms
- *
- * Revision 1.18 1999/07/27 00:53:28 warmerda
- * ensure that whole old field value clear on write of string
- *
- * Revision 1.1 1999/07/05 18:58:07 warmerda
- * New
- *
- * Revision 1.17 1999/06/11 19:14:12 warmerda
- * Fixed some memory leaks.
- *
- * Revision 1.16 1999/06/11 19:04:11 warmerda
- * Remoted some unused variables.
- *
- * Revision 1.15 1999/05/11 03:19:28 warmerda
- * added new Tuple api, and improved extension handling - add from candrsn
- *
- * Revision 1.14 1999/05/04 15:01:48 warmerda
- * Added 'F' support.
- *
- * Revision 1.13 1999/03/23 17:38:59 warmerda
- * DBFAddField() now actually does return the new field number, or -1 if
- * it fails.
- *
- * Revision 1.12 1999/03/06 02:54:46 warmerda
- * Added logic to convert shapefile name to dbf filename in DBFOpen()
- * for convenience.
- *
- * Revision 1.11 1998/12/31 15:30:34 warmerda
- * Improved the interchangability of numeric and string attributes. Add
- * white space trimming option for attributes.
- *
- * Revision 1.10 1998/12/03 16:36:44 warmerda
- * Use r+b instead of rb+ for binary access.
- *
- * Revision 1.9 1998/12/03 15:34:23 warmerda
- * Updated copyright message.
- *
- * Revision 1.8 1997/12/04 15:40:15 warmerda
- * Added newline character after field definitions.
- *
- * Revision 1.7 1997/03/06 14:02:10 warmerda
- * Ensure bUpdated is initialized.
- *
- * Revision 1.6 1996/02/12 04:54:41 warmerda
- * Ensure that DBFWriteAttribute() returns TRUE if it succeeds.
- *
- * Revision 1.5 1995/10/21 03:15:12 warmerda
- * Changed to use binary file access, and ensure that the
- * field name field is zero filled, and limited to 10 chars.
- *
- * Revision 1.4 1995/08/24 18:10:42 warmerda
- * Added use of SfRealloc() to avoid pre-ANSI realloc() functions such
- * as on the Sun.
- *
- * Revision 1.3 1995/08/04 03:15:16 warmerda
- * Fixed up header.
- *
- * Revision 1.2 1995/08/04 03:14:43 warmerda
- * Added header.
+ * Revision 1.54 2004/09/15 16:26:10 fwarmerdam
+ * Treat all blank numeric fields as null too.
*/
-static char rcsid[] =
- "$Id$";
-
#include "shapefil.h"
#include <math.h>
@@ -246,39 +145,14 @@
#include <ctype.h>
#include <string.h>
+SHP_CVSID("$Id$")
+
#ifndef FALSE
# define FALSE 0
# define TRUE 1
#endif
-static int nStringFieldLen = 0;
-static char * pszStringField = NULL;
-
/************************************************************************/
-/* DBFSet_atof_function() */
-/* */
-/* This makes it possible to initialise a different atof() function */
-/* which might be necessary because the standard atof() might be */
-/* sensitive to locale settings. */
-/* */
-/* If the calling application uses a locale with different decimal_point*/
-/* it should better also give us a locale agnostic atof() function. */
-/* */
-/* As far as I can see from Python PEP331 and GNU libc documentation */
-/* there is no standard for such a function yet. */
-/* */
-/* bernhard.reiter at intevation.de 20060924 */
-/************************************************************************/
-
-static double (* atof_function)(const char *nptr) = &atof;
-
-void SHPAPI_CALL
- DBFSetatof_function( double (* new_atof_function)(const char *nptr))
-{
- atof_function = new_atof_function;
-}
-
-/************************************************************************/
/* SfRealloc() */
/* */
/* A realloc cover function that will access a NULL pointer as */
@@ -335,13 +209,16 @@
abyHeader[10] = (unsigned char) (psDBF->nRecordLength % 256);
abyHeader[11] = (unsigned char) (psDBF->nRecordLength / 256);
+ abyHeader[29] = (unsigned char) (psDBF->iLanguageDriver);
+
/* -------------------------------------------------------------------- */
/* Write the initial 32 byte file header, and all the field */
/* descriptions. */
/* -------------------------------------------------------------------- */
- fseek( psDBF->fp, 0, 0 );
- fwrite( abyHeader, XBASE_FLDHDR_SZ, 1, psDBF->fp );
- fwrite( psDBF->pszHeader, XBASE_FLDHDR_SZ, psDBF->nFields, psDBF->fp );
+ psDBF->sHooks.FSeek( psDBF->fp, 0, 0 );
+ psDBF->sHooks.FWrite( abyHeader, XBASE_FLDHDR_SZ, 1, psDBF->fp );
+ psDBF->sHooks.FWrite( psDBF->pszHeader, XBASE_FLDHDR_SZ, psDBF->nFields,
+ psDBF->fp );
/* -------------------------------------------------------------------- */
/* Write out the newline character if there is room for it. */
@@ -351,7 +228,7 @@
char cNewline;
cNewline = 0x0d;
- fwrite( &cNewline, 1, 1, psDBF->fp );
+ psDBF->sHooks.FWrite( &cNewline, 1, 1, psDBF->fp );
}
}
@@ -361,24 +238,90 @@
/* Write out the current record if there is one. */
/************************************************************************/
-static void DBFFlushRecord( DBFHandle psDBF )
+static int DBFFlushRecord( DBFHandle psDBF )
{
- int nRecordOffset;
+ SAOffset nRecordOffset;
if( psDBF->bCurrentRecordModified && psDBF->nCurrentRecord > -1 )
{
psDBF->bCurrentRecordModified = FALSE;
- nRecordOffset = psDBF->nRecordLength * psDBF->nCurrentRecord
- + psDBF->nHeaderLength;
+ nRecordOffset =
+ psDBF->nRecordLength * (SAOffset) psDBF->nCurrentRecord
+ + psDBF->nHeaderLength;
- fseek( psDBF->fp, nRecordOffset, 0 );
- fwrite( psDBF->pszCurrentRecord, psDBF->nRecordLength, 1, psDBF->fp );
+ if( psDBF->sHooks.FSeek( psDBF->fp, nRecordOffset, 0 ) != 0
+ || psDBF->sHooks.FWrite( psDBF->pszCurrentRecord,
+ psDBF->nRecordLength,
+ 1, psDBF->fp ) != 1 )
+ {
+#ifdef USE_CPL
+ CPLError( CE_Failure, CPLE_FileIO,
+ "Failure writing DBF record %d.",
+ psDBF->nCurrentRecord );
+#else
+ fprintf( stderr, "Failure writing DBF record %d.",
+ psDBF->nCurrentRecord );
+#endif
+ return FALSE;
+ }
}
+
+ return TRUE;
}
/************************************************************************/
+/* DBFLoadRecord() */
+/************************************************************************/
+
+static int DBFLoadRecord( DBFHandle psDBF, int iRecord )
+
+{
+ if( psDBF->nCurrentRecord != iRecord )
+ {
+ SAOffset nRecordOffset;
+
+ if( !DBFFlushRecord( psDBF ) )
+ return FALSE;
+
+ nRecordOffset =
+ psDBF->nRecordLength * (SAOffset) iRecord + psDBF->nHeaderLength;
+
+ if( psDBF->sHooks.FSeek( psDBF->fp, nRecordOffset, SEEK_SET ) != 0 )
+ {
+#ifdef USE_CPL
+ CPLError( CE_Failure, CPLE_FileIO,
+ "fseek(%ld) failed on DBF file.\n",
+ (long) nRecordOffset );
+#else
+ fprintf( stderr, "fseek(%ld) failed on DBF file.\n",
+ (long) nRecordOffset );
+#endif
+ return FALSE;
+ }
+
+ if( psDBF->sHooks.FRead( psDBF->pszCurrentRecord,
+ psDBF->nRecordLength, 1, psDBF->fp ) != 1 )
+ {
+#ifdef USE_CPL
+ CPLError( CE_Failure, CPLE_FileIO,
+ "fread(%d) failed on DBF file.\n",
+ psDBF->nRecordLength );
+#else
+ fprintf( stderr, "fread(%d) failed on DBF file.\n",
+ psDBF->nRecordLength );
+#endif
+ return FALSE;
+ }
+
+ psDBF->nCurrentRecord = iRecord;
+ }
+
+ return TRUE;
+}
+
+/************************************************************************/
/* DBFUpdateHeader() */
/************************************************************************/
@@ -393,18 +336,18 @@
DBFFlushRecord( psDBF );
- fseek( psDBF->fp, 0, 0 );
- fread( abyFileHeader, 32, 1, psDBF->fp );
+ psDBF->sHooks.FSeek( psDBF->fp, 0, 0 );
+ psDBF->sHooks.FRead( abyFileHeader, 32, 1, psDBF->fp );
abyFileHeader[4] = (unsigned char) (psDBF->nRecords % 256);
abyFileHeader[5] = (unsigned char) ((psDBF->nRecords/256) % 256);
abyFileHeader[6] = (unsigned char) ((psDBF->nRecords/(256*256)) % 256);
abyFileHeader[7] = (unsigned char) ((psDBF->nRecords/(256*256*256)) % 256);
- fseek( psDBF->fp, 0, 0 );
- fwrite( abyFileHeader, 32, 1, psDBF->fp );
+ psDBF->sHooks.FSeek( psDBF->fp, 0, 0 );
+ psDBF->sHooks.FWrite( abyFileHeader, 32, 1, psDBF->fp );
- fflush( psDBF->fp );
+ psDBF->sHooks.FFlush( psDBF->fp );
}
/************************************************************************/
@@ -417,10 +360,29 @@
DBFOpen( const char * pszFilename, const char * pszAccess )
{
+ SAHooks sHooks;
+
+ SASetupDefaultHooks( &sHooks );
+
+ return DBFOpenLL( pszFilename, pszAccess, &sHooks );
+}
+
+/************************************************************************/
+/* DBFOpen() */
+/* */
+/* Open a .dbf file. */
+/************************************************************************/
+
+DBFHandle SHPAPI_CALL
+DBFOpenLL( const char * pszFilename, const char * pszAccess, SAHooks *psHooks )
+
+{
DBFHandle psDBF;
- unsigned char *pabyBuf;
- int nFields, nHeadLen, nRecLen, iField, i;
+ SAFile pfCPG;
+ unsigned char *pabyBuf;
+ int nFields, nHeadLen, iField, i;
char *pszBasename, *pszFullname;
+ int nBufSize = 500;
/* -------------------------------------------------------------------- */
/* We only allow the access strings "rb" and "r+". */
@@ -454,20 +416,30 @@
sprintf( pszFullname, "%s.dbf", pszBasename );
psDBF = (DBFHandle) calloc( 1, sizeof(DBFInfo) );
- psDBF->fp = fopen( pszFullname, pszAccess );
+ psDBF->fp = psHooks->FOpen( pszFullname, pszAccess );
+ memcpy( &(psDBF->sHooks), psHooks, sizeof(SAHooks) );
if( psDBF->fp == NULL )
{
sprintf( pszFullname, "%s.DBF", pszBasename );
- psDBF->fp = fopen(pszFullname, pszAccess );
+ psDBF->fp = psDBF->sHooks.FOpen(pszFullname, pszAccess );
}
-
+
+ sprintf( pszFullname, "%s.cpg", pszBasename );
+ pfCPG = psHooks->FOpen( pszFullname, "r" );
+ if( pfCPG == NULL )
+ {
+ sprintf( pszFullname, "%s.CPG", pszBasename );
+ pfCPG = psHooks->FOpen( pszFullname, "r" );
+ }
+
free( pszBasename );
free( pszFullname );
if( psDBF->fp == NULL )
{
free( psDBF );
+ if( pfCPG ) psHooks->FClose( pfCPG );
return( NULL );
}
@@ -478,10 +450,11 @@
/* -------------------------------------------------------------------- */
/* Read Table Header info */
/* -------------------------------------------------------------------- */
- pabyBuf = (unsigned char *) malloc(500);
- if( fread( pabyBuf, 32, 1, psDBF->fp ) != 1 )
+ pabyBuf = (unsigned char *) malloc(nBufSize);
+ if( psDBF->sHooks.FRead( pabyBuf, 32, 1, psDBF->fp ) != 1 )
{
- fclose( psDBF->fp );
+ psDBF->sHooks.FClose( psDBF->fp );
+ if( pfCPG ) psDBF->sHooks.FClose( pfCPG );
free( pabyBuf );
free( psDBF );
return NULL;
@@ -491,24 +464,53 @@
pabyBuf[4] + pabyBuf[5]*256 + pabyBuf[6]*256*256 + pabyBuf[7]*256*256*256;
psDBF->nHeaderLength = nHeadLen = pabyBuf[8] + pabyBuf[9]*256;
- psDBF->nRecordLength = nRecLen = pabyBuf[10] + pabyBuf[11]*256;
-
+ psDBF->nRecordLength = pabyBuf[10] + pabyBuf[11]*256;
+ psDBF->iLanguageDriver = pabyBuf[29];
+
psDBF->nFields = nFields = (nHeadLen - 32) / 32;
- psDBF->pszCurrentRecord = (char *) malloc(nRecLen);
+ psDBF->pszCurrentRecord = (char *) malloc(psDBF->nRecordLength);
/* -------------------------------------------------------------------- */
+/* Figure out the code page from the LDID and CPG */
+/* -------------------------------------------------------------------- */
+
+ psDBF->pszCodePage = NULL;
+ if( pfCPG )
+ {
+ size_t n;
+ char *buffer = (char *) pabyBuf;
+ buffer[0] = '\0';
+ psDBF->sHooks.FRead( pabyBuf, nBufSize - 1, 1, pfCPG );
+ n = strcspn( (char *) pabyBuf, "\n\r" );
+ if( n > 0 )
+ {
+ pabyBuf[n] = '\0';
+ psDBF->pszCodePage = (char *) malloc(n + 1);
+ memcpy( psDBF->pszCodePage, pabyBuf, n + 1 );
+ }
+ psDBF->sHooks.FClose( pfCPG );
+ }
+ if( psDBF->pszCodePage == NULL && pabyBuf[29] != 0 )
+ {
+ sprintf( (char *) pabyBuf, "LDID/%d", psDBF->iLanguageDriver );
+ psDBF->pszCodePage = (char *) malloc(strlen((char*)pabyBuf) + 1);
+ strcpy( psDBF->pszCodePage, (char *) pabyBuf );
+ }
+
+/* -------------------------------------------------------------------- */
/* Read in Field Definitions */
/* -------------------------------------------------------------------- */
pabyBuf = (unsigned char *) SfRealloc(pabyBuf,nHeadLen);
psDBF->pszHeader = (char *) pabyBuf;
- fseek( psDBF->fp, 32, 0 );
- if( fread( pabyBuf, nHeadLen-32, 1, psDBF->fp ) != 1 )
+ psDBF->sHooks.FSeek( psDBF->fp, 32, 0 );
+ if( psDBF->sHooks.FRead( pabyBuf, nHeadLen-32, 1, psDBF->fp ) != 1 )
{
- fclose( psDBF->fp );
+ psDBF->sHooks.FClose( psDBF->fp );
free( pabyBuf );
+ free( psDBF->pszCurrentRecord );
free( psDBF );
return NULL;
}
@@ -531,8 +533,17 @@
}
else
{
+ psDBF->panFieldSize[iField] = pabyFInfo[16];
+ psDBF->panFieldDecimals[iField] = 0;
+
+/*
+** The following seemed to be used sometimes to handle files with long
+** string fields, but in other cases (such as bug 1202) the decimals field
+** just seems to indicate some sort of preferred formatting, not very
+** wide fields. So I have disabled this code. FrankW.
psDBF->panFieldSize[iField] = pabyFInfo[16] + pabyFInfo[17]*256;
psDBF->panFieldDecimals[iField] = 0;
+*/
}
psDBF->pachFieldType[iField] = (char) pabyFInfo[11];
@@ -553,6 +564,9 @@
void SHPAPI_CALL
DBFClose(DBFHandle psDBF)
{
+ if( psDBF == NULL )
+ return;
+
/* -------------------------------------------------------------------- */
/* Write out header if not already written. */
/* -------------------------------------------------------------------- */
@@ -571,7 +585,7 @@
/* -------------------------------------------------------------------- */
/* Close, and free resources. */
/* -------------------------------------------------------------------- */
- fclose( psDBF->fp );
+ psDBF->sHooks.FClose( psDBF->fp );
if( psDBF->panFieldOffset != NULL )
{
@@ -581,33 +595,61 @@
free( psDBF->pachFieldType );
}
+ if( psDBF->pszWorkField != NULL )
+ free( psDBF->pszWorkField );
+
free( psDBF->pszHeader );
free( psDBF->pszCurrentRecord );
+ free( psDBF->pszCodePage );
free( psDBF );
+}
- if( pszStringField != NULL )
- {
- free( pszStringField );
- pszStringField = NULL;
- nStringFieldLen = 0;
- }
+/************************************************************************/
+/* DBFCreate() */
+/* */
+/* Create a new .dbf file with default code page LDID/87 (0x57) */
+/************************************************************************/
+
+DBFHandle SHPAPI_CALL
+DBFCreate( const char * pszFilename )
+
+{
+ return DBFCreateEx( pszFilename, "LDID/87" ); // 0x57
}
/************************************************************************/
+/* DBFCreateEx() */
+/* */
+/* Create a new .dbf file. */
+/************************************************************************/
+
+DBFHandle SHPAPI_CALL
+DBFCreateEx( const char * pszFilename, const char* pszCodePage )
+
+{
+ SAHooks sHooks;
+
+ SASetupDefaultHooks( &sHooks );
+
+ return DBFCreateLL( pszFilename, pszCodePage , &sHooks );
+}
+
+/************************************************************************/
/* DBFCreate() */
/* */
/* Create a new .dbf file. */
/************************************************************************/
DBFHandle SHPAPI_CALL
-DBFCreate( const char * pszFilename )
+DBFCreateLL( const char * pszFilename, const char * pszCodePage, SAHooks *psHooks )
{
DBFHandle psDBF;
- FILE *fp;
+ SAFile fp;
char *pszFullname, *pszBasename;
- int i;
+ int i, ldid = -1;
+ char chZero = '\0';
/* -------------------------------------------------------------------- */
/* Compute the base (layer) name. If there is any extension */
@@ -625,29 +667,52 @@
pszFullname = (char *) malloc(strlen(pszBasename) + 5);
sprintf( pszFullname, "%s.dbf", pszBasename );
- free( pszBasename );
/* -------------------------------------------------------------------- */
/* Create the file. */
/* -------------------------------------------------------------------- */
- fp = fopen( pszFullname, "wb" );
+ fp = psHooks->FOpen( pszFullname, "wb" );
if( fp == NULL )
return( NULL );
+
+ psHooks->FWrite( &chZero, 1, 1, fp );
+ psHooks->FClose( fp );
- fputc( 0, fp );
- fclose( fp );
-
- fp = fopen( pszFullname, "rb+" );
+ fp = psHooks->FOpen( pszFullname, "rb+" );
if( fp == NULL )
return( NULL );
+
+ sprintf( pszFullname, "%s.cpg", pszBasename );
+ if( pszCodePage != NULL )
+ {
+ if( strncmp( pszCodePage, "LDID/", 5 ) == 0 )
+ {
+ ldid = atoi( pszCodePage + 5 );
+ if( ldid > 255 )
+ ldid = -1; // don't use 0 to indicate out of range as LDID/0 is a valid one
+ }
+ if( ldid < 0 )
+ {
+ SAFile fpCPG = psHooks->FOpen( pszFullname, "w" );
+ psHooks->FWrite( (char*) pszCodePage, strlen(pszCodePage), 1, fpCPG );
+ psHooks->FClose( fpCPG );
+ }
+ }
+ if( pszCodePage == NULL || ldid >= 0 )
+ {
+ psHooks->Remove( pszFullname );
+ }
+
+ free( pszBasename );
free( pszFullname );
/* -------------------------------------------------------------------- */
/* Create the info structure. */
/* -------------------------------------------------------------------- */
- psDBF = (DBFHandle) malloc(sizeof(DBFInfo));
+ psDBF = (DBFHandle) calloc(1,sizeof(DBFInfo));
+ memcpy( &(psDBF->sHooks), psHooks, sizeof(SAHooks) );
psDBF->fp = fp;
psDBF->nRecords = 0;
psDBF->nFields = 0;
@@ -666,14 +731,21 @@
psDBF->bNoHeader = TRUE;
+ psDBF->iLanguageDriver = ldid > 0 ? ldid : 0;
+ psDBF->pszCodePage = NULL;
+ if( pszCodePage )
+ {
+ psDBF->pszCodePage = (char * ) malloc( strlen(pszCodePage) + 1 );
+ strcpy( psDBF->pszCodePage, pszCodePage );
+ }
+
return( psDBF );
}
/************************************************************************/
/* DBFAddField() */
/* */
-/* Add a field to a newly created .dbf file before any records */
-/* are written. */
+/* Add a field to a newly created .dbf or to an existing one */
/************************************************************************/
int SHPAPI_CALL
@@ -681,24 +753,50 @@
DBFFieldType eType, int nWidth, int nDecimals )
{
+ char chNativeType = 'C';
+
+ if( eType == FTLogical )
+ chNativeType = 'L';
+ else if( eType == FTString )
+ chNativeType = 'C';
+ else
+ chNativeType = 'N';
+
+ return DBFAddNativeFieldType( psDBF, pszFieldName, chNativeType,
+ nWidth, nDecimals );
+}
+
+/************************************************************************/
+/* DBFAddField() */
+/* */
+/* Add a field to a newly created .dbf file before any records */
+/* are written. */
+/************************************************************************/
+
+int SHPAPI_CALL
+DBFAddNativeFieldType(DBFHandle psDBF, const char * pszFieldName,
+ char chType, int nWidth, int nDecimals )
+
+{
char *pszFInfo;
int i;
+ int nOldRecordLength, nOldHeaderLength;
+ char *pszRecord;
+ char chFieldFill;
+ SAOffset nRecordOffset;
/* -------------------------------------------------------------------- */
/* Do some checking to ensure we can add records to this file. */
/* -------------------------------------------------------------------- */
- if( psDBF->nRecords > 0 )
- return( -1 );
+ if( nWidth < 1 )
+ return -1;
- if( !psDBF->bNoHeader )
- return( -1 );
+ if( nWidth > 255 )
+ nWidth = 255;
- if( eType != FTDouble && nDecimals != 0 )
- return( -1 );
+ nOldRecordLength = psDBF->nRecordLength;
+ nOldHeaderLength = psDBF->nHeaderLength;
- if( nWidth < 1 )
- return -1;
-
/* -------------------------------------------------------------------- */
/* SfRealloc all the arrays larger to hold the additional field */
/* information. */
@@ -706,16 +804,16 @@
psDBF->nFields++;
psDBF->panFieldOffset = (int *)
- SfRealloc( psDBF->panFieldOffset, sizeof(int) * psDBF->nFields );
+ SfRealloc( psDBF->panFieldOffset, sizeof(int) * psDBF->nFields );
psDBF->panFieldSize = (int *)
- SfRealloc( psDBF->panFieldSize, sizeof(int) * psDBF->nFields );
+ SfRealloc( psDBF->panFieldSize, sizeof(int) * psDBF->nFields );
psDBF->panFieldDecimals = (int *)
- SfRealloc( psDBF->panFieldDecimals, sizeof(int) * psDBF->nFields );
+ SfRealloc( psDBF->panFieldDecimals, sizeof(int) * psDBF->nFields );
psDBF->pachFieldType = (char *)
- SfRealloc( psDBF->pachFieldType, sizeof(char) * psDBF->nFields );
+ SfRealloc( psDBF->pachFieldType, sizeof(char) * psDBF->nFields );
/* -------------------------------------------------------------------- */
/* Assign the new field information fields. */
@@ -724,14 +822,8 @@
psDBF->nRecordLength += nWidth;
psDBF->panFieldSize[psDBF->nFields-1] = nWidth;
psDBF->panFieldDecimals[psDBF->nFields-1] = nDecimals;
+ psDBF->pachFieldType[psDBF->nFields-1] = chType;
- if( eType == FTLogical )
- psDBF->pachFieldType[psDBF->nFields-1] = 'L';
- else if( eType == FTString )
- psDBF->pachFieldType[psDBF->nFields-1] = 'C';
- else
- psDBF->pachFieldType[psDBF->nFields-1] = 'N';
-
/* -------------------------------------------------------------------- */
/* Extend the required header information. */
/* -------------------------------------------------------------------- */
@@ -752,7 +844,7 @@
pszFInfo[11] = psDBF->pachFieldType[psDBF->nFields-1];
- if( eType == FTString )
+ if( chType == 'C' )
{
pszFInfo[16] = (unsigned char) (nWidth % 256);
pszFInfo[17] = (unsigned char) (nWidth / 256);
@@ -767,8 +859,61 @@
/* Make the current record buffer appropriately larger. */
/* -------------------------------------------------------------------- */
psDBF->pszCurrentRecord = (char *) SfRealloc(psDBF->pszCurrentRecord,
- psDBF->nRecordLength);
+ psDBF->nRecordLength);
+ /* we're done if dealing with new .dbf */
+ if( psDBF->bNoHeader )
+ return( psDBF->nFields - 1 );
+
+/* -------------------------------------------------------------------- */
+/* For existing .dbf file, shift records */
+/* -------------------------------------------------------------------- */
+
+ /* alloc record */
+ pszRecord = (char *) malloc(sizeof(char) * psDBF->nRecordLength);
+
+ switch (chType)
+ {
+ case 'N':
+ case 'F':
+ chFieldFill = '*';
+ break;
+ case 'D':
+ chFieldFill = '0';
+ break;
+ case 'L':
+ chFieldFill = '?';
+ break;
+ default:
+ chFieldFill = ' ';
+ break;
+ }
+
+ for (i = psDBF->nRecords-1; i >= 0; --i)
+ {
+ nRecordOffset = nOldRecordLength * (SAOffset) i + nOldHeaderLength;
+
+ /* load record */
+ psDBF->sHooks.FSeek( psDBF->fp, nRecordOffset, 0 );
+ psDBF->sHooks.FRead( pszRecord, nOldRecordLength, 1, psDBF->fp );
+
+ /* set new field's value to NULL */
+ memset(pszRecord + nOldRecordLength, chFieldFill, nWidth);
+
+ nRecordOffset = psDBF->nRecordLength * (SAOffset) i + psDBF->nHeaderLength;
+
+ /* move the record to its new place */
+ psDBF->sHooks.FSeek( psDBF->fp, nRecordOffset, 0 );
+ psDBF->sHooks.FWrite( pszRecord, psDBF->nRecordLength, 1, psDBF->fp );
+ }
+
+ /* free record */
+ free(pszRecord);
+
+ /* force update of header with new header, record length and new field */
+ psDBF->bNoHeader = TRUE;
+ DBFUpdateHeader( psDBF );
+
return( psDBF->nFields-1 );
}
@@ -782,12 +927,9 @@
char chReqType )
{
- int nRecordOffset;
unsigned char *pabyRec;
void *pReturnField = NULL;
- static double dDoubleField;
-
/* -------------------------------------------------------------------- */
/* Verify selection. */
/* -------------------------------------------------------------------- */
@@ -800,59 +942,42 @@
/* -------------------------------------------------------------------- */
/* Have we read the record? */
/* -------------------------------------------------------------------- */
- if( psDBF->nCurrentRecord != hEntity )
- {
- DBFFlushRecord( psDBF );
+ if( !DBFLoadRecord( psDBF, hEntity ) )
+ return NULL;
- nRecordOffset = psDBF->nRecordLength * hEntity + psDBF->nHeaderLength;
-
- if( fseek( psDBF->fp, nRecordOffset, 0 ) != 0 )
- {
- fprintf( stderr, "fseek(%d) failed on DBF file.\n",
- nRecordOffset );
- return NULL;
- }
-
- if( fread( psDBF->pszCurrentRecord, psDBF->nRecordLength,
- 1, psDBF->fp ) != 1 )
- {
- fprintf( stderr, "fread(%d) failed on DBF file.\n",
- psDBF->nRecordLength );
- return NULL;
- }
-
- psDBF->nCurrentRecord = hEntity;
- }
-
pabyRec = (unsigned char *) psDBF->pszCurrentRecord;
/* -------------------------------------------------------------------- */
-/* Ensure our field buffer is large enough to hold this buffer. */
+/* Ensure we have room to extract the target field. */
/* -------------------------------------------------------------------- */
- if( psDBF->panFieldSize[iField]+1 > nStringFieldLen )
+ if( psDBF->panFieldSize[iField] >= psDBF->nWorkFieldLength )
{
- nStringFieldLen = psDBF->panFieldSize[iField]*2 + 10;
- pszStringField = (char *) SfRealloc(pszStringField,nStringFieldLen);
+ psDBF->nWorkFieldLength = psDBF->panFieldSize[iField] + 100;
+ if( psDBF->pszWorkField == NULL )
+ psDBF->pszWorkField = (char *) malloc(psDBF->nWorkFieldLength);
+ else
+ psDBF->pszWorkField = (char *) realloc(psDBF->pszWorkField,
+ psDBF->nWorkFieldLength);
}
/* -------------------------------------------------------------------- */
/* Extract the requested field. */
/* -------------------------------------------------------------------- */
- strncpy( pszStringField,
+ strncpy( psDBF->pszWorkField,
((const char *) pabyRec) + psDBF->panFieldOffset[iField],
psDBF->panFieldSize[iField] );
- pszStringField[psDBF->panFieldSize[iField]] = '\0';
+ psDBF->pszWorkField[psDBF->panFieldSize[iField]] = '\0';
- pReturnField = pszStringField;
+ pReturnField = psDBF->pszWorkField;
/* -------------------------------------------------------------------- */
/* Decode the field. */
/* -------------------------------------------------------------------- */
if( chReqType == 'N' )
{
- dDoubleField = (*atof_function)(pszStringField);
+ psDBF->dfDoubleField = psDBF->sHooks.Atof(psDBF->pszWorkField);
- pReturnField = &dDoubleField;
+ pReturnField = &(psDBF->dfDoubleField);
}
/* -------------------------------------------------------------------- */
@@ -863,7 +988,7 @@
{
char *pchSrc, *pchDst;
- pchDst = pchSrc = pszStringField;
+ pchDst = pchSrc = psDBF->pszWorkField;
while( *pchSrc == ' ' )
pchSrc++;
@@ -871,7 +996,7 @@
*(pchDst++) = *(pchSrc++);
*pchDst = '\0';
- while( pchDst != pszStringField && *(--pchDst) == ' ' )
+ while( pchDst != psDBF->pszWorkField && *(--pchDst) == ' ' )
*pchDst = '\0';
}
#endif
@@ -958,6 +1083,7 @@
{
const char *pszValue;
+ int i;
pszValue = DBFReadStringAttribute( psDBF, iRecord, iField );
@@ -968,9 +1094,21 @@
{
case 'N':
case 'F':
- /* NULL numeric fields have value "****************" */
- return pszValue[0] == '*';
+ /*
+ ** We accept all asterisks or all blanks as NULL
+ ** though according to the spec I think it should be all
+ ** asterisks.
+ */
+ if( pszValue[0] == '*' )
+ return TRUE;
+ for( i = 0; pszValue[i] != '\0'; i++ )
+ {
+ if( pszValue[i] != ' ' )
+ return FALSE;
+ }
+ return TRUE;
+
case 'D':
/* NULL date fields have value "00000000" */
return strncmp(pszValue,"00000000",8) == 0;
@@ -1045,10 +1183,10 @@
return( FTLogical);
else if( psDBF->pachFieldType[iField] == 'N'
- || psDBF->pachFieldType[iField] == 'F'
- || psDBF->pachFieldType[iField] == 'D' )
+ || psDBF->pachFieldType[iField] == 'F' )
{
- if( psDBF->panFieldDecimals[iField] > 0 )
+ if( psDBF->panFieldDecimals[iField] > 0
+ || psDBF->panFieldSize[iField] > 10 )
return( FTDouble );
else
return( FTInteger );
@@ -1069,7 +1207,7 @@
void * pValue )
{
- int nRecordOffset, i, j, nRetResult = TRUE;
+ int i, j, nRetResult = TRUE;
unsigned char *pabyRec;
char szSField[400], szFormat[20];
@@ -1087,7 +1225,8 @@
/* -------------------------------------------------------------------- */
if( hEntity == psDBF->nRecords )
{
- DBFFlushRecord( psDBF );
+ if( !DBFFlushRecord( psDBF ) )
+ return FALSE;
psDBF->nRecords++;
for( i = 0; i < psDBF->nRecordLength; i++ )
@@ -1100,18 +1239,9 @@
/* Is this an existing record, but different than the last one */
/* we accessed? */
/* -------------------------------------------------------------------- */
- if( psDBF->nCurrentRecord != hEntity )
- {
- DBFFlushRecord( psDBF );
+ if( !DBFLoadRecord( psDBF, hEntity ) )
+ return FALSE;
- nRecordOffset = psDBF->nRecordLength * hEntity + psDBF->nHeaderLength;
-
- fseek( psDBF->fp, nRecordOffset, 0 );
- fread( psDBF->pszCurrentRecord, psDBF->nRecordLength, 1, psDBF->fp );
-
- psDBF->nCurrentRecord = hEntity;
- }
-
pabyRec = (unsigned char *) psDBF->pszCurrentRecord;
psDBF->bCurrentRecordModified = TRUE;
@@ -1147,7 +1277,7 @@
default:
/* empty string fields are considered NULL */
- memset( (char *) (pabyRec+psDBF->panFieldOffset[iField]), '\0',
+ memset( (char *) (pabyRec+psDBF->panFieldOffset[iField]), ' ',
psDBF->panFieldSize[iField] );
break;
}
@@ -1166,7 +1296,7 @@
{
int nWidth = psDBF->panFieldSize[iField];
- if( sizeof(szSField)-2 < nWidth )
+ if( (int) sizeof(szSField)-2 < nWidth )
nWidth = sizeof(szSField)-2;
sprintf( szFormat, "%%%dd", nWidth );
@@ -1184,7 +1314,7 @@
{
int nWidth = psDBF->panFieldSize[iField];
- if( sizeof(szSField)-2 < nWidth )
+ if( (int) sizeof(szSField)-2 < nWidth )
nWidth = sizeof(szSField)-2;
sprintf( szFormat, "%%%d.%df",
@@ -1240,7 +1370,7 @@
void * pValue )
{
- int nRecordOffset, i, j;
+ int i, j;
unsigned char *pabyRec;
/* -------------------------------------------------------------------- */
@@ -1257,7 +1387,8 @@
/* -------------------------------------------------------------------- */
if( hEntity == psDBF->nRecords )
{
- DBFFlushRecord( psDBF );
+ if( !DBFFlushRecord( psDBF ) )
+ return FALSE;
psDBF->nRecords++;
for( i = 0; i < psDBF->nRecordLength; i++ )
@@ -1270,18 +1401,9 @@
/* Is this an existing record, but different than the last one */
/* we accessed? */
/* -------------------------------------------------------------------- */
- if( psDBF->nCurrentRecord != hEntity )
- {
- DBFFlushRecord( psDBF );
+ if( !DBFLoadRecord( psDBF, hEntity ) )
+ return FALSE;
- nRecordOffset = psDBF->nRecordLength * hEntity + psDBF->nHeaderLength;
-
- fseek( psDBF->fp, nRecordOffset, 0 );
- fread( psDBF->pszCurrentRecord, psDBF->nRecordLength, 1, psDBF->fp );
-
- psDBF->nCurrentRecord = hEntity;
- }
-
pabyRec = (unsigned char *) psDBF->pszCurrentRecord;
/* -------------------------------------------------------------------- */
@@ -1386,7 +1508,7 @@
DBFWriteTuple(DBFHandle psDBF, int hEntity, void * pRawTuple )
{
- int nRecordOffset, i;
+ int i;
unsigned char *pabyRec;
/* -------------------------------------------------------------------- */
@@ -1403,7 +1525,8 @@
/* -------------------------------------------------------------------- */
if( hEntity == psDBF->nRecords )
{
- DBFFlushRecord( psDBF );
+ if( !DBFFlushRecord( psDBF ) )
+ return FALSE;
psDBF->nRecords++;
for( i = 0; i < psDBF->nRecordLength; i++ )
@@ -1416,18 +1539,9 @@
/* Is this an existing record, but different than the last one */
/* we accessed? */
/* -------------------------------------------------------------------- */
- if( psDBF->nCurrentRecord != hEntity )
- {
- DBFFlushRecord( psDBF );
+ if( !DBFLoadRecord( psDBF, hEntity ) )
+ return FALSE;
- nRecordOffset = psDBF->nRecordLength * hEntity + psDBF->nHeaderLength;
-
- fseek( psDBF->fp, nRecordOffset, 0 );
- fread( psDBF->pszCurrentRecord, psDBF->nRecordLength, 1, psDBF->fp );
-
- psDBF->nCurrentRecord = hEntity;
- }
-
pabyRec = (unsigned char *) psDBF->pszCurrentRecord;
memcpy ( pabyRec, pRawTuple, psDBF->nRecordLength );
@@ -1439,49 +1553,23 @@
}
/************************************************************************/
-/* DBFReadTuple() */
+/* DBFReadTuple() */
/* */
-/* Read one of the attribute fields of a record. */
+/* Read a complete record. Note that the result is only valid */
+/* till the next record read for any reason. */
/************************************************************************/
const char SHPAPI_CALL1(*)
DBFReadTuple(DBFHandle psDBF, int hEntity )
{
- int nRecordOffset;
- unsigned char *pabyRec;
- static char *pReturnTuple = NULL;
-
- static int nTupleLen = 0;
-
-/* -------------------------------------------------------------------- */
-/* Have we read the record? */
-/* -------------------------------------------------------------------- */
if( hEntity < 0 || hEntity >= psDBF->nRecords )
return( NULL );
- if( psDBF->nCurrentRecord != hEntity )
- {
- DBFFlushRecord( psDBF );
+ if( !DBFLoadRecord( psDBF, hEntity ) )
+ return NULL;
- nRecordOffset = psDBF->nRecordLength * hEntity + psDBF->nHeaderLength;
-
- fseek( psDBF->fp, nRecordOffset, 0 );
- fread( psDBF->pszCurrentRecord, psDBF->nRecordLength, 1, psDBF->fp );
-
- psDBF->nCurrentRecord = hEntity;
- }
-
- pabyRec = (unsigned char *) psDBF->pszCurrentRecord;
-
- if ( nTupleLen < psDBF->nRecordLength) {
- nTupleLen = psDBF->nRecordLength;
- pReturnTuple = (char *) SfRealloc(pReturnTuple, psDBF->nRecordLength);
- }
-
- memcpy ( pReturnTuple, pabyRec, psDBF->nRecordLength );
-
- return( pReturnTuple );
+ return (const char *) psDBF->pszCurrentRecord;
}
/************************************************************************/
@@ -1495,24 +1583,24 @@
{
DBFHandle newDBF;
- newDBF = DBFCreate ( pszFilename );
+ newDBF = DBFCreateEx ( pszFilename, psDBF->pszCodePage );
if ( newDBF == NULL ) return ( NULL );
- newDBF->pszHeader = (char *) malloc ( 32 * psDBF->nFields );
- memcpy ( newDBF->pszHeader, psDBF->pszHeader, 32 * psDBF->nFields );
-
newDBF->nFields = psDBF->nFields;
newDBF->nRecordLength = psDBF->nRecordLength;
- newDBF->nHeaderLength = 32 * (psDBF->nFields+1);
+ newDBF->nHeaderLength = psDBF->nHeaderLength;
+ newDBF->pszHeader = (char *) malloc ( newDBF->nHeaderLength );
+ memcpy ( newDBF->pszHeader, psDBF->pszHeader, newDBF->nHeaderLength );
+
newDBF->panFieldOffset = (int *) malloc ( sizeof(int) * psDBF->nFields );
memcpy ( newDBF->panFieldOffset, psDBF->panFieldOffset, sizeof(int) * psDBF->nFields );
newDBF->panFieldSize = (int *) malloc ( sizeof(int) * psDBF->nFields );
memcpy ( newDBF->panFieldSize, psDBF->panFieldSize, sizeof(int) * psDBF->nFields );
newDBF->panFieldDecimals = (int *) malloc ( sizeof(int) * psDBF->nFields );
memcpy ( newDBF->panFieldDecimals, psDBF->panFieldDecimals, sizeof(int) * psDBF->nFields );
- newDBF->pachFieldType = (char *) malloc ( sizeof(int) * psDBF->nFields );
- memcpy ( newDBF->pachFieldType, psDBF->pachFieldType, sizeof(int) * psDBF->nFields );
+ newDBF->pachFieldType = (char *) malloc ( sizeof(char) * psDBF->nFields );
+ memcpy ( newDBF->pachFieldType, psDBF->pachFieldType, sizeof(char)*psDBF->nFields );
newDBF->bNoHeader = TRUE;
newDBF->bUpdated = TRUE;
@@ -1592,3 +1680,192 @@
}
return(-1);
}
+
+/************************************************************************/
+/* DBFIsRecordDeleted() */
+/* */
+/* Returns TRUE if the indicated record is deleted, otherwise */
+/* it returns FALSE. */
+/************************************************************************/
+
+int SHPAPI_CALL DBFIsRecordDeleted( DBFHandle psDBF, int iShape )
+
+{
+/* -------------------------------------------------------------------- */
+/* Verify selection. */
+/* -------------------------------------------------------------------- */
+ if( iShape < 0 || iShape >= psDBF->nRecords )
+ return TRUE;
+
+/* -------------------------------------------------------------------- */
+/* Have we read the record? */
+/* -------------------------------------------------------------------- */
+ if( !DBFLoadRecord( psDBF, iShape ) )
+ return FALSE;
+
+/* -------------------------------------------------------------------- */
+/* '*' means deleted. */
+/* -------------------------------------------------------------------- */
+ return psDBF->pszCurrentRecord[0] == '*';
+}
+
+/************************************************************************/
+/* DBFMarkRecordDeleted() */
+/************************************************************************/
+
+int SHPAPI_CALL DBFMarkRecordDeleted( DBFHandle psDBF, int iShape,
+ int bIsDeleted )
+
+{
+ char chNewFlag;
+
+/* -------------------------------------------------------------------- */
+/* Verify selection. */
+/* -------------------------------------------------------------------- */
+ if( iShape < 0 || iShape >= psDBF->nRecords )
+ return FALSE;
+
+/* -------------------------------------------------------------------- */
+/* Is this an existing record, but different than the last one */
+/* we accessed? */
+/* -------------------------------------------------------------------- */
+ if( !DBFLoadRecord( psDBF, iShape ) )
+ return FALSE;
+
+/* -------------------------------------------------------------------- */
+/* Assign value, marking record as dirty if it changes. */
+/* -------------------------------------------------------------------- */
+ if( bIsDeleted )
+ chNewFlag = '*';
+ else
+ chNewFlag = ' ';
+
+ if( psDBF->pszCurrentRecord[0] != chNewFlag )
+ {
+ psDBF->bCurrentRecordModified = TRUE;
+ psDBF->bUpdated = TRUE;
+ psDBF->pszCurrentRecord[0] = chNewFlag;
+ }
+
+ return TRUE;
+}
+
+/************************************************************************/
+/* DBFGetCodePage */
+/************************************************************************/
+
+const char SHPAPI_CALL1(*)
+DBFGetCodePage(DBFHandle psDBF )
+{
+ if( psDBF == NULL )
+ return NULL;
+ return psDBF->pszCodePage;
+}
+
+/************************************************************************/
+/* DBFDeleteField() */
+/* */
+/* Remove a field from a .dbf file */
+/************************************************************************/
+
+int SHPAPI_CALL
+DBFDeleteField(DBFHandle psDBF, int iField)
+{
+ int nOldRecordLength, nOldHeaderLength;
+ int nDeletedFieldOffset, nDeletedFieldSize;
+ SAOffset nRecordOffset;
+ char* pszRecord;
+ int i, iRecord;
+
+ if (iField < 0 || iField >= psDBF->nFields)
+ return FALSE;
+
+ /* make sure that everything is written in .dbf */
+ if( !DBFFlushRecord( psDBF ) )
+ return FALSE;
+
+ /* get information about field to be deleted */
+ nOldRecordLength = psDBF->nRecordLength;
+ nOldHeaderLength = psDBF->nHeaderLength;
+ nDeletedFieldOffset = psDBF->panFieldOffset[iField];
+ nDeletedFieldSize = psDBF->panFieldSize[iField];
+
+ /* update fields info */
+ for (i = iField + 1; i < psDBF->nFields; i++)
+ {
+ psDBF->panFieldOffset[i-1] = psDBF->panFieldOffset[i] - nDeletedFieldSize;
+ psDBF->panFieldSize[i-1] = psDBF->panFieldSize[i];
+ psDBF->panFieldDecimals[i-1] = psDBF->panFieldDecimals[i];
+ psDBF->pachFieldType[i-1] = psDBF->pachFieldType[i];
+ }
+
+ /* resize fields arrays */
+ psDBF->nFields--;
+
+ psDBF->panFieldOffset = (int *)
+ SfRealloc( psDBF->panFieldOffset, sizeof(int) * psDBF->nFields );
+
+ psDBF->panFieldSize = (int *)
+ SfRealloc( psDBF->panFieldSize, sizeof(int) * psDBF->nFields );
+
+ psDBF->panFieldDecimals = (int *)
+ SfRealloc( psDBF->panFieldDecimals, sizeof(int) * psDBF->nFields );
+
+ psDBF->pachFieldType = (char *)
+ SfRealloc( psDBF->pachFieldType, sizeof(char) * psDBF->nFields );
+
+ /* update header information */
+ psDBF->nHeaderLength -= 32;
+ psDBF->nRecordLength -= nDeletedFieldSize;
+
+ /* overwrite field information in header */
+ memcpy(psDBF->pszHeader + iField*32,
+ psDBF->pszHeader + (iField+1)*32,
+ sizeof(char) * (psDBF->nFields - iField)*32);
+
+ psDBF->pszHeader = (char *) SfRealloc(psDBF->pszHeader,psDBF->nFields*32);
+
+ /* update size of current record appropriately */
+ psDBF->pszCurrentRecord = (char *) SfRealloc(psDBF->pszCurrentRecord,
+ psDBF->nRecordLength);
+
+ /* we're done if we're dealing with not yet created .dbf */
+ if ( psDBF->bNoHeader && psDBF->nRecords == 0 )
+ return TRUE;
+
+ /* force update of header with new header and record length */
+ psDBF->bNoHeader = TRUE;
+ DBFUpdateHeader( psDBF );
+
+ /* alloc record */
+ pszRecord = (char *) malloc(sizeof(char) * nOldRecordLength);
+
+ /* shift records to their new positions */
+ for (iRecord = 0; iRecord < psDBF->nRecords; iRecord++)
+ {
+ nRecordOffset =
+ nOldRecordLength * (SAOffset) iRecord + nOldHeaderLength;
+
+ /* load record */
+ psDBF->sHooks.FSeek( psDBF->fp, nRecordOffset, 0 );
+ psDBF->sHooks.FRead( pszRecord, nOldRecordLength, 1, psDBF->fp );
+
+ nRecordOffset =
+ psDBF->nRecordLength * (SAOffset) iRecord + psDBF->nHeaderLength;
+
+ /* move record in two steps */
+ psDBF->sHooks.FSeek( psDBF->fp, nRecordOffset, 0 );
+ psDBF->sHooks.FWrite( pszRecord, nDeletedFieldOffset, 1, psDBF->fp );
+ psDBF->sHooks.FWrite( pszRecord + nDeletedFieldOffset + nDeletedFieldSize,
+ nOldRecordLength - nDeletedFieldOffset - nDeletedFieldSize,
+ 1, psDBF->fp );
+
+ }
+
+ /* TODO: truncate file */
+
+ /* free record */
+ free(pszRecord);
+
+ return TRUE;
+}
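
For orientation, a minimal usage sketch of the DBF-level API this revision of dbfopen.c introduces (code pages, native field types, deletion flags, field removal). The sketch is not part of the commit: the file name and field layout are invented and error handling is mostly omitted.

#include <stdio.h>
#include "shapefil.h"

int main( void )
{
    DBFHandle hDBF;
    int iName, iPop;

    /* DBFCreateEx() records the code page; plain DBFCreate() now defaults to "LDID/87". */
    hDBF = DBFCreateEx( "example.dbf", "LDID/87" );
    if( hDBF == NULL )
        return 1;

    /* DBFAddNativeFieldType() takes the raw xBase type character instead of a DBFFieldType. */
    iName = DBFAddNativeFieldType( hDBF, "NAME", 'C', 40, 0 );
    iPop  = DBFAddNativeFieldType( hDBF, "POP",  'N', 10, 0 );

    DBFWriteStringAttribute( hDBF, 0, iName, "Ghent" );
    DBFWriteIntegerAttribute( hDBF, 0, iPop, 237250 );

    /* Deletion flags and the code page can now be queried per handle. */
    DBFMarkRecordDeleted( hDBF, 0, 1 );
    printf( "deleted=%d codepage=%s\n",
            DBFIsRecordDeleted( hDBF, 0 ), DBFGetCodePage( hDBF ) );

    /* Fields can be added to or removed from a file that already has records. */
    DBFDeleteField( hDBF, iPop );

    DBFClose( hDBF );
    return 0;
}
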
Copied: trunk/thuban/libraries/shapelib/safileio.c (from rev 2888, branches/WIP-pyshapelib-Unicode/thuban/libraries/shapelib/safileio.c)
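
The copied safileio.c supplies the stdio-backed default implementation of the SAHooks table declared in shapefil.h; SASetupDefaultHooks() fills the table and the new *OpenLL() entry points consume it. A minimal sketch of the intended call pattern follows; open_layer() is an invented helper name.

#include "shapefil.h"

/* Open the .shp/.shx and the .dbf of one layer through the same hook table. */
int open_layer( const char *pszLayer, SHPHandle *phSHP, DBFHandle *phDBF )
{
    SAHooks sHooks;

    SASetupDefaultHooks( &sHooks );              /* stdio defaults from safileio.c */

    *phSHP = SHPOpenLL( pszLayer, "rb", &sHooks );
    *phDBF = DBFOpenLL( pszLayer, "rb", &sHooks );

    return *phSHP != NULL && *phDBF != NULL;
}
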
Modified: trunk/thuban/libraries/shapelib/shapefil.h
===================================================================
--- trunk/thuban/libraries/shapelib/shapefil.h 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/shapelib/shapefil.h 2009-09-27 20:36:21 UTC (rev 2889)
@@ -1,5 +1,5 @@
-#ifndef _SHAPEFILE_H_INCLUDED
-#define _SHAPEFILE_H_INCLUDED
+#ifndef SHAPEFILE_H_INCLUDED
+#define SHAPEFILE_H_INCLUDED
/******************************************************************************
* $Id$
@@ -36,46 +36,70 @@
* DEALINGS IN THE SOFTWARE.
******************************************************************************
*
- * $Log$
- * Revision 1.3 2004/05/17 15:47:57 bh
- * Update to newest shapelib and get rid of Thuban specific extensions,
- * i.e. use the new DBFUpdateHeader instead of our DBFCommit kludge
+ * $Log: shapefil.h,v $
+ * Revision 1.46 2008-11-12 14:28:15 fwarmerdam
+ * DBFCreateField() now works on files with records
*
- * * libraries/shapelib/shpopen.c: Update to version from current
- * shapelib CVS.
+ * Revision 1.45 2008/11/11 17:47:10 fwarmerdam
+ * added DBFDeleteField() function
*
- * * libraries/shapelib/shapefil.h: Update to version from current
- * shapelib CVS.
+ * Revision 1.44 2008/01/16 20:05:19 bram
+ * Add file hooks that accept UTF-8 encoded filenames on some platforms. Use SASetupUtf8Hooks
+ * tosetup the hooks and check SHPAPI_UTF8_HOOKS for its availability. Currently, this
+ * is only available on the Windows platform that decodes the UTF-8 filenames to wide
+ * character strings and feeds them to _wfopen and _wremove.
*
- * * libraries/shapelib/dbfopen.c: Update to version from current
- * shapelib CVS.
- * (DBFCommit): Effectively removed since shapelib itself has
- * DBFUpdateHeader now which is better for what DBFCommit wanted to
- * achieve.
- * We're now using an unmodified version of dbfopen.
+ * Revision 1.43 2008/01/10 16:35:30 fwarmerdam
+ * avoid _ prefix on #defined symbols (bug 1840)
*
- * * libraries/pyshapelib/dbflib_wrap.c, libraries/pyshapelib/dbflib.py:
- * Update from dbflib.i
+ * Revision 1.42 2007/12/18 18:28:14 bram
+ * - create hook for client specific atof (bugzilla ticket 1615)
+ * - check for NULL handle before closing cpCPG file, and close after reading.
*
- * * libraries/pyshapelib/dbflib.i (DBFInfo_commit): New. Implementation of
- * the commit method. This new indirection is necessary because we use the
- * DBFUpdateHeader function now which is not available in shapelib <=
- * 1.2.10
- * (DBFFile::commit): Use DBFInfo_commit as implementation
- * (pragma __class__): New. Kludge to remove the commit method when
- * the DBFUpdateHeader function isn't available
- * (_have_commit): New. Helper for the pragma kludge.
+ * Revision 1.41 2007/12/15 20:25:32 bram
+ * dbfopen.c now reads the Code Page information from the DBF file, and exports
+ * this information as a string through the DBFGetCodePage function. This is
+ * either the number from the LDID header field ("LDID/<number>") or as the
+ * content of an accompanying .CPG file. When creating a DBF file, the code can
+ * be set using DBFCreateEx.
*
- * * libraries/pyshapelib/setup.py (dbf_macros): New. Return the
- * preprocessor macros needed to compile the dbflib wrapper. Determine
- * whether DBFUpdateHeader is available and define the right value of
- * HAVE_UPDATE_HEADER
- * (extensions): Use dbf_macros for the dbflibc extension
+ * Revision 1.40 2007/12/06 07:00:25 fwarmerdam
+ * dbfopen now using SAHooks for fileio
*
- * * setup.py (extensions): Add the HAVE_UPDATE_HEADER macro with
- * value '1' to the Lib.dbflibc extension. This simply reflects the
- * shapelib and pyshapelib updates
+ * Revision 1.39 2007/12/04 20:37:56 fwarmerdam
+ * preliminary implementation of hooks api for io and errors
*
+ * Revision 1.38 2007/11/21 22:39:56 fwarmerdam
+ * close shx file in readonly mode (GDAL #1956)
+ *
+ * Revision 1.37 2007/10/27 03:31:14 fwarmerdam
+ * limit default depth of tree to 12 levels (gdal ticket #1594)
+ *
+ * Revision 1.36 2007/09/10 23:33:15 fwarmerdam
+ * Upstreamed support for visibility flag in SHPAPI_CALL for the needs
+ * of GDAL (gdal ticket #1810).
+ *
+ * Revision 1.35 2007/09/03 19:48:10 fwarmerdam
+ * move DBFReadAttribute() static dDoubleField into dbfinfo
+ *
+ * Revision 1.34 2006/06/17 15:33:32 fwarmerdam
+ * added pszWorkField - bug 1202 (rso)
+ *
+ * Revision 1.33 2006/02/15 01:14:30 fwarmerdam
+ * added DBFAddNativeFieldType
+ *
+ * Revision 1.32 2006/01/26 15:07:32 fwarmerdam
+ * add bMeasureIsUsed flag from Craig Bruce: Bug 1249
+ *
+ * Revision 1.31 2006/01/05 01:27:27 fwarmerdam
+ * added dbf deletion mark/fetch
+ *
+ * Revision 1.30 2005/01/03 22:30:13 fwarmerdam
+ * added support for saved quadtrees
+ *
+ * Revision 1.29 2004/09/26 20:09:35 fwarmerdam
+ * avoid rcsid warnings
+ *
* Revision 1.28 2003/12/29 06:02:18 fwarmerdam
* added cpl_error.h option
*
@@ -96,67 +120,6 @@
*
* Revision 1.22 2002/01/15 14:32:00 warmerda
* try to improve SHPAPI_CALL docs
- *
- * Revision 1.21 2001/11/01 16:29:55 warmerda
- * move pabyRec into SHPInfo for thread safety
- *
- * Revision 1.20 2001/07/20 13:06:02 warmerda
- * fixed SHPAPI attribute for SHPTreeFindLikelyShapes
- *
- * Revision 1.19 2001/05/31 19:20:13 warmerda
- * added DBFGetFieldIndex()
- *
- * Revision 1.18 2001/05/31 18:15:40 warmerda
- * Added support for NULL fields in DBF files
- *
- * Revision 1.17 2001/05/23 13:36:52 warmerda
- * added use of SHPAPI_CALL
- *
- * Revision 1.16 2000/09/25 14:15:59 warmerda
- * added DBFGetNativeFieldType()
- *
- * Revision 1.15 2000/02/16 16:03:51 warmerda
- * added null shape support
- *
- * Revision 1.14 1999/11/05 14:12:05 warmerda
- * updated license terms
- *
- * Revision 1.13 1999/06/02 18:24:21 warmerda
- * added trimming code
- *
- * Revision 1.12 1999/06/02 17:56:12 warmerda
- * added quad'' subnode support for trees
- *
- * Revision 1.11 1999/05/18 19:11:11 warmerda
- * Added example searching capability
- *
- * Revision 1.10 1999/05/18 17:49:38 warmerda
- * added initial quadtree support
- *
- * Revision 1.9 1999/05/11 03:19:28 warmerda
- * added new Tuple api, and improved extension handling - add from candrsn
- *
- * Revision 1.8 1999/03/23 17:22:27 warmerda
- * Added extern "C" protection for C++ users of shapefil.h.
- *
- * Revision 1.7 1998/12/31 15:31:07 warmerda
- * Added the TRIM_DBF_WHITESPACE and DISABLE_MULTIPATCH_MEASURE options.
- *
- * Revision 1.6 1998/12/03 15:48:15 warmerda
- * Added SHPCalculateExtents().
- *
- * Revision 1.5 1998/11/09 20:57:16 warmerda
- * Altered SHPGetInfo() call.
- *
- * Revision 1.4 1998/11/09 20:19:33 warmerda
- * Added 3D support, and use of SHPObject.
- *
- * Revision 1.3 1995/08/23 02:24:05 warmerda
- * Added support for reading bounds.
- *
- * Revision 1.2 1995/08/04 03:17:39 warmerda
- * Added header.
- *
*/
#include <stdio.h>
@@ -167,6 +130,7 @@
#ifdef USE_CPL
#include "cpl_error.h"
+#include "cpl_vsi.h"
#endif
#ifdef __cplusplus
@@ -189,7 +153,7 @@
/* is disabled. */
/* -------------------------------------------------------------------- */
#define DISABLE_MULTIPATCH_MEASURE
-
+
/* -------------------------------------------------------------------- */
/* SHPAPI_CALL */
/* */
@@ -225,21 +189,76 @@
#endif
#ifndef SHPAPI_CALL
-# define SHPAPI_CALL
+# if defined(USE_GCC_VISIBILITY_FLAG)
+# define SHPAPI_CALL __attribute__ ((visibility("default")))
+# define SHPAPI_CALL1(x) __attribute__ ((visibility("default"))) x
+# else
+# define SHPAPI_CALL
+# endif
#endif
#ifndef SHPAPI_CALL1
# define SHPAPI_CALL1(x) x SHPAPI_CALL
#endif
+/* -------------------------------------------------------------------- */
+/* Macros for controlling CVSID and ensuring they don't appear */
+/* as unreferenced variables resulting in lots of warnings. */
+/* -------------------------------------------------------------------- */
+#ifndef DISABLE_CVSID
+# define SHP_CVSID(string) static char cpl_cvsid[] = string; \
+static char *cvsid_aw() { return( cvsid_aw() ? ((char *) NULL) : cpl_cvsid ); }
+#else
+# define SHP_CVSID(string)
+#endif
+
+/* -------------------------------------------------------------------- */
+/* On some platforms, additional file IO hooks are defined that */
+/* accept UTF-8 encoded Unicode filenames. */
+/* -------------------------------------------------------------------- */
+#if defined(_WIN32) || defined(__WIN32__) || defined(WIN32)
+# define SHPAPI_WINDOWS
+# define SHPAPI_UTF8_HOOKS
+#endif
+
+/* -------------------------------------------------------------------- */
+/* IO/Error hook functions. */
+/* -------------------------------------------------------------------- */
+typedef int *SAFile;
+
+#ifndef SAOffset
+typedef unsigned long SAOffset;
+#endif
+
+typedef struct {
+ SAFile (*FOpen) ( const char *filename, const char *access);
+ SAOffset (*FRead) ( void *p, SAOffset size, SAOffset nmemb, SAFile file);
+ SAOffset (*FWrite)( void *p, SAOffset size, SAOffset nmemb, SAFile file);
+ SAOffset (*FSeek) ( SAFile file, SAOffset offset, int whence );
+ SAOffset (*FTell) ( SAFile file );
+ int (*FFlush)( SAFile file );
+ int (*FClose)( SAFile file );
+ int (*Remove) ( const char *filename );
+
+ void (*Error) ( const char *message );
+ double (*Atof) ( const char *str );
+} SAHooks;
+
+void SHPAPI_CALL SASetupDefaultHooks( SAHooks *psHooks );
+#ifdef SHPAPI_UTF8_HOOKS
+void SHPAPI_CALL SASetupUtf8Hooks( SAHooks *psHooks );
+#endif
+
/************************************************************************/
/* SHP Support. */
/************************************************************************/
typedef struct
{
- FILE *fpSHP;
- FILE *fpSHX;
+ SAHooks sHooks;
+ SAFile fpSHP;
+ SAFile fpSHX;
+
int nShapeType; /* SHPT_* */
int nFileSize; /* SHP file */
@@ -320,15 +339,26 @@
double dfYMax;
double dfZMax;
double dfMMax;
+
+ int bMeasureIsUsed;
} SHPObject;
/* -------------------------------------------------------------------- */
/* SHP API Prototypes */
/* -------------------------------------------------------------------- */
+
+/* If pszAccess is read-only, the fpSHX field of the returned structure */
+/* will be NULL as it is not necessary to keep the SHX file open */
SHPHandle SHPAPI_CALL
SHPOpen( const char * pszShapeFile, const char * pszAccess );
SHPHandle SHPAPI_CALL
+ SHPOpenLL( const char *pszShapeFile, const char *pszAccess,
+ SAHooks *psHooks );
+SHPHandle SHPAPI_CALL
SHPCreate( const char * pszShapeFile, int nShapeType );
+SHPHandle SHPAPI_CALL
+ SHPCreateLL( const char * pszShapeFile, int nShapeType,
+ SAHooks *psHooks );
void SHPAPI_CALL
SHPGetInfo( SHPHandle hSHP, int * pnEntities, int * pnShapeType,
double * padfMinBound, double * padfMaxBound );
@@ -343,13 +373,16 @@
void SHPAPI_CALL
SHPComputeExtents( SHPObject * psObject );
SHPObject SHPAPI_CALL1(*)
- SHPCreateObject( int nSHPType, int nShapeId,
- int nParts, int * panPartStart, int * panPartType,
- int nVertices, double * padfX, double * padfY,
- double * padfZ, double * padfM );
+ SHPCreateObject( int nSHPType, int nShapeId, int nParts,
+ const int * panPartStart, const int * panPartType,
+ int nVertices,
+ const double * padfX, const double * padfY,
+ const double * padfZ, const double * padfM );
SHPObject SHPAPI_CALL1(*)
SHPCreateSimpleObject( int nSHPType, int nVertices,
- double * padfX, double * padfY, double * padfZ );
+ const double * padfX,
+ const double * padfY,
+ const double * padfZ );
int SHPAPI_CALL
SHPRewindObject( SHPHandle hSHP, SHPObject * psObject );
@@ -369,6 +402,9 @@
/* this can be two or four for binary or quad tree */
#define MAX_SUBNODE 4
+/* upper limit of tree levels for automatic estimation */
+#define MAX_DEFAULT_TREE_DEPTH 12
+
typedef struct shape_tree_node
{
/* region covered by this node */
@@ -392,6 +428,7 @@
int nMaxDepth;
int nDimension;
+ int nTotalCount;
SHPTreeNode *psRoot;
} SHPTree;
@@ -425,13 +462,20 @@
int SHPAPI_CALL
SHPCheckBoundsOverlap( double *, double *, double *, double *, int );
+int SHPAPI_CALL1(*)
+SHPSearchDiskTree( FILE *fp,
+ double *padfBoundsMin, double *padfBoundsMax,
+ int *pnShapeCount );
+
/************************************************************************/
/* DBF Support. */
/************************************************************************/
typedef struct
{
- FILE *fp;
+ SAHooks sHooks;
+ SAFile fp;
+
int nRecords;
int nRecordLength;
@@ -447,9 +491,17 @@
int nCurrentRecord;
int bCurrentRecordModified;
char *pszCurrentRecord;
+
+ int nWorkFieldLength;
+ char *pszWorkField;
int bNoHeader;
int bUpdated;
+
+ double dfDoubleField;
+
+ int iLanguageDriver;
+ char *pszCodePage;
} DBFInfo;
typedef DBFInfo * DBFHandle;
@@ -464,14 +516,18 @@
#define XBASE_FLDHDR_SZ 32
-/* to hand over a locale agnostic atof function, if decimal_point != ".\0" */
-void SHPAPI_CALL
- DBFSetatof_function( double (* new_atof_function)(const char *nptr));
DBFHandle SHPAPI_CALL
DBFOpen( const char * pszDBFFile, const char * pszAccess );
DBFHandle SHPAPI_CALL
+ DBFOpenLL( const char * pszDBFFile, const char * pszAccess,
+ SAHooks *psHooks );
+DBFHandle SHPAPI_CALL
DBFCreate( const char * pszDBFFile );
+DBFHandle SHPAPI_CALL
+ DBFCreateEx( const char * pszDBFFile, const char * pszCodePage );
+DBFHandle SHPAPI_CALL
+ DBFCreateLL( const char * pszDBFFile, const char * pszCodePage, SAHooks *psHooks );
int SHPAPI_CALL
DBFGetFieldCount( DBFHandle psDBF );
@@ -481,6 +537,13 @@
DBFAddField( DBFHandle hDBF, const char * pszFieldName,
DBFFieldType eType, int nWidth, int nDecimals );
+int SHPAPI_CALL
+ DBFAddNativeFieldType( DBFHandle hDBF, const char * pszFieldName,
+ char chType, int nWidth, int nDecimals );
+
+int SHPAPI_CALL
+ DBFDeleteField( DBFHandle hDBF, int iField );
+
DBFFieldType SHPAPI_CALL
DBFGetFieldInfo( DBFHandle psDBF, int iField,
char * pszFieldName, int * pnWidth, int * pnDecimals );
@@ -522,6 +585,10 @@
int SHPAPI_CALL
DBFWriteTuple(DBFHandle psDBF, int hEntity, void * pRawTuple );
+int SHPAPI_CALL DBFIsRecordDeleted( DBFHandle psDBF, int iShape );
+int SHPAPI_CALL DBFMarkRecordDeleted( DBFHandle psDBF, int iShape,
+ int bIsDeleted );
+
DBFHandle SHPAPI_CALL
DBFCloneEmpty(DBFHandle psDBF, const char * pszFilename );
@@ -532,8 +599,11 @@
char SHPAPI_CALL
DBFGetNativeFieldType( DBFHandle hDBF, int iField );
+const char SHPAPI_CALL1(*)
+ DBFGetCodePage(DBFHandle psDBF );
+
#ifdef __cplusplus
}
#endif
-#endif /* ndef _SHAPEFILE_H_INCLUDED */
+#endif /* ndef SHAPEFILE_H_INCLUDED */
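
With the SAHooks structure above, the old DBFSetatof_function() removed from dbfopen.c in this commit is superseded by the per-handle Atof hook: a caller that needs locale-independent number parsing overrides that member before opening the file. A sketch under that assumption; my_locale_safe_atof() is a hypothetical caller-supplied function with a placeholder body.

#include <stdlib.h>
#include "shapefil.h"

/* Hypothetical locale-independent parser; a real implementation would always
 * treat '.' as the decimal separator regardless of the current locale. */
static double my_locale_safe_atof( const char *str )
{
    return atof( str );                  /* placeholder body only */
}

DBFHandle open_dbf_with_custom_atof( const char *pszFile )
{
    SAHooks sHooks;

    SASetupDefaultHooks( &sHooks );      /* start from the defaults in safileio.c */
    sHooks.Atof = my_locale_safe_atof;   /* used by DBFReadAttribute() for numeric fields */

    return DBFOpenLL( pszFile, "rb", &sHooks );
}
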
Modified: trunk/thuban/libraries/shapelib/shpopen.c
===================================================================
--- trunk/thuban/libraries/shapelib/shpopen.c 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/shapelib/shpopen.c 2009-09-27 20:36:21 UTC (rev 2889)
@@ -33,46 +33,56 @@
* DEALINGS IN THE SOFTWARE.
******************************************************************************
*
- * $Log$
- * Revision 1.3 2004/05/17 15:47:57 bh
- * Update to newest shapelib and get rid of Thuban specific extensions,
- * i.e. use the new DBFUpdateHeader instead of our DBFCommit kludge
+ * $Log: shpopen.c,v $
+ * Revision 1.59 2008-03-14 05:25:31 fwarmerdam
+ * Correct crash on buggy geometries (gdal #2218)
*
- * * libraries/shapelib/shpopen.c: Update to version from current
- * shapelib CVS.
+ * Revision 1.58 2008/01/08 23:28:26 bram
+ * on line 2095, use a float instead of a double to avoid a compiler warning
*
- * * libraries/shapelib/shapefil.h: Update to version from current
- * shapelib CVS.
+ * Revision 1.57 2007/12/06 07:00:25 fwarmerdam
+ * dbfopen now using SAHooks for fileio
*
- * * libraries/shapelib/dbfopen.c: Update to version from current
- * shapelib CVS.
- * (DBFCommit): Effectively removed since shapelib itself has
- * DBFUpdateHeader now which is better for what DBFCommit wanted to
- * achieve.
- * We're now using an unmodified version of dbfopen.
+ * Revision 1.56 2007/12/04 20:37:56 fwarmerdam
+ * preliminary implementation of hooks api for io and errors
*
- * * libraries/pyshapelib/dbflib_wrap.c, libraries/pyshapelib/dbflib.py:
- * Update from dbflib.i
+ * Revision 1.55 2007/11/21 22:39:56 fwarmerdam
+ * close shx file in readonly mode (GDAL #1956)
*
- * * libraries/pyshapelib/dbflib.i (DBFInfo_commit): New. Implementation of
- * the commit method. This new indirection is necessary because we use the
- * DBFUpdateHeader function now which is not available in shapelib <=
- * 1.2.10
- * (DBFFile::commit): Use DBFInfo_commit as implementation
- * (pragma __class__): New. Kludge to remove the commit method when
- * the DBFUpdateHeader function isn't available
- * (_have_commit): New. Helper for the pragma kludge.
+ * Revision 1.54 2007/11/15 00:12:47 mloskot
+ * Backported recent changes from GDAL (Ticket #1415) to Shapelib.
*
- * * libraries/pyshapelib/setup.py (dbf_macros): New. Return the
- * preprocessor macros needed to compile the dbflib wrapper. Determine
- * whether DBFUpdateHeader is available and define the right value of
- * HAVE_UPDATE_HEADER
- * (extensions): Use dbf_macros for the dbflibc extension
+ * Revision 1.53 2007/11/14 22:31:08 fwarmerdam
+ * checks after mallocs to detect for corrupted/voluntary broken shapefiles.
+ * http://trac.osgeo.org/gdal/ticket/1991
*
- * * setup.py (extensions): Add the HAVE_UPDATE_HEADER macro with
- * value '1' to the Lib.dbflibc extension. This simply reflects the
- * shapelib and pyshapelib updates
+ * Revision 1.52 2007/06/21 15:58:33 fwarmerdam
+ * fix for SHPRewindObject when rings touch at one vertex (gdal #976)
*
+ * Revision 1.51 2006/09/04 15:24:01 fwarmerdam
+ * Fixed up log message for 1.49.
+ *
+ * Revision 1.50 2006/09/04 15:21:39 fwarmerdam
+ * fix of last fix
+ *
+ * Revision 1.49 2006/09/04 15:21:00 fwarmerdam
+ * MLoskot: Added stronger test of Shapefile reading failures, e.g. truncated
+ * files. The problem was discovered by Tim Sutton and reported here
+ * https://svn.qgis.org/trac/ticket/200
+ *
+ * Revision 1.48 2006/01/26 15:07:32 fwarmerdam
+ * add bMeasureIsUsed flag from Craig Bruce: Bug 1249
+ *
+ * Revision 1.47 2006/01/04 20:07:23 fwarmerdam
+ * In SHPWriteObject() make sure that the record length is updated
+ * when rewriting an existing record.
+ *
+ * Revision 1.46 2005/02/11 17:17:46 fwarmerdam
+ * added panPartStart[0] validation
+ *
+ * Revision 1.45 2004/09/26 20:09:48 fwarmerdam
+ * const correctness changes
+ *
* Revision 1.44 2003/12/29 00:18:39 fwarmerdam
* added error checking for failed IO and optional CPL error reporting
*
@@ -212,9 +222,6 @@
*
*/
-static char rcsid[] =
- "$Id$";
-
#include "shapefil.h"
#include <math.h>
@@ -222,7 +229,10 @@
#include <assert.h>
#include <stdlib.h>
#include <string.h>
+#include <stdio.h>
+SHP_CVSID("$Id$")
+
typedef unsigned char uchar;
#if UINT_MAX == 65535
@@ -242,6 +252,12 @@
# define MAX(a,b) ((a>b) ? a : b)
#endif
+#if defined(WIN32) || defined(_WIN32)
+# ifndef snprintf
+# define snprintf _snprintf
+# endif
+#endif
+
static int bBigEndian;
@@ -296,6 +312,12 @@
int32 i32;
double dValue;
int32 *panSHX;
+
+ if (psSHP->fpSHX == NULL)
+ {
+ psSHP->sHooks.Error( "SHPWriteHeader failed : SHX file is closed");
+ return;
+ }
/* -------------------------------------------------------------------- */
/* Prepare header block for .shp file. */
@@ -353,13 +375,10 @@
/* -------------------------------------------------------------------- */
/* Write .shp file header. */
/* -------------------------------------------------------------------- */
- if( fseek( psSHP->fpSHP, 0, 0 ) != 0
- || fwrite( abyHeader, 100, 1, psSHP->fpSHP ) != 1 )
+ if( psSHP->sHooks.FSeek( psSHP->fpSHP, 0, 0 ) != 0
+ || psSHP->sHooks.FWrite( abyHeader, 100, 1, psSHP->fpSHP ) != 1 )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_OpenFailed,
- "Failure writing .shp header." );
-#endif
+ psSHP->sHooks.Error( "Failure writing .shp header" );
return;
}
@@ -370,13 +389,10 @@
ByteCopy( &i32, abyHeader+24, 4 );
if( !bBigEndian ) SwapWord( 4, abyHeader+24 );
- if( fseek( psSHP->fpSHX, 0, 0 ) != 0
- || fwrite( abyHeader, 100, 1, psSHP->fpSHX ) != 1 )
+ if( psSHP->sHooks.FSeek( psSHP->fpSHX, 0, 0 ) != 0
+ || psSHP->sHooks.FWrite( abyHeader, 100, 1, psSHP->fpSHX ) != 1 )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_OpenFailed,
- "Failure writing .shx header." );
-#endif
+ psSHP->sHooks.Error( "Failure writing .shx header" );
return;
}
@@ -393,13 +409,10 @@
if( !bBigEndian ) SwapWord( 4, panSHX+i*2+1 );
}
- if( fwrite( panSHX, sizeof(int32) * 2, psSHP->nRecords, psSHP->fpSHX )
+ if( (int)psSHP->sHooks.FWrite( panSHX, sizeof(int32)*2, psSHP->nRecords, psSHP->fpSHX )
!= psSHP->nRecords )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_OpenFailed,
- "Failure writing .shx contents." );
-#endif
+ psSHP->sHooks.Error( "Failure writing .shx contents" );
}
free( panSHX );
@@ -407,19 +420,34 @@
/* -------------------------------------------------------------------- */
/* Flush to disk. */
/* -------------------------------------------------------------------- */
- fflush( psSHP->fpSHP );
- fflush( psSHP->fpSHX );
+ psSHP->sHooks.FFlush( psSHP->fpSHP );
+ psSHP->sHooks.FFlush( psSHP->fpSHX );
}
/************************************************************************/
-/* shpopen() */
+/* SHPOpen() */
+/************************************************************************/
+
+SHPHandle SHPAPI_CALL
+SHPOpen( const char * pszLayer, const char * pszAccess )
+
+{
+ SAHooks sHooks;
+
+ SASetupDefaultHooks( &sHooks );
+
+ return SHPOpenLL( pszLayer, pszAccess, &sHooks );
+}
+
+/************************************************************************/
+/* SHPOpen() */
/* */
/* Open the .shp and .shx files based on the basename of the */
/* files or either file name. */
/************************************************************************/
SHPHandle SHPAPI_CALL
-SHPOpen( const char * pszLayer, const char * pszAccess )
+SHPOpenLL( const char * pszLayer, const char * pszAccess, SAHooks *psHooks )
{
char *pszFullname, *pszBasename;
@@ -455,6 +483,7 @@
psSHP = (SHPHandle) calloc(sizeof(SHPInfo),1);
psSHP->bUpdated = FALSE;
+ memcpy( &(psSHP->sHooks), psHooks, sizeof(SAHooks) );
/* -------------------------------------------------------------------- */
/* Compute the base (layer) name. If there is any extension */
@@ -475,12 +504,12 @@
/* a PC to Unix with upper case filenames won't work! */
/* -------------------------------------------------------------------- */
pszFullname = (char *) malloc(strlen(pszBasename) + 5);
- sprintf( pszFullname, "%s.shp", pszBasename );
- psSHP->fpSHP = fopen(pszFullname, pszAccess );
+ sprintf( pszFullname, "%s.shp", pszBasename ) ;
+ psSHP->fpSHP = psSHP->sHooks.FOpen(pszFullname, pszAccess );
if( psSHP->fpSHP == NULL )
{
sprintf( pszFullname, "%s.SHP", pszBasename );
- psSHP->fpSHP = fopen(pszFullname, pszAccess );
+ psSHP->fpSHP = psSHP->sHooks.FOpen(pszFullname, pszAccess );
}
if( psSHP->fpSHP == NULL )
@@ -497,11 +526,11 @@
}
sprintf( pszFullname, "%s.shx", pszBasename );
- psSHP->fpSHX = fopen(pszFullname, pszAccess );
+ psSHP->fpSHX = psSHP->sHooks.FOpen(pszFullname, pszAccess );
if( psSHP->fpSHX == NULL )
{
sprintf( pszFullname, "%s.SHX", pszBasename );
- psSHP->fpSHX = fopen(pszFullname, pszAccess );
+ psSHP->fpSHX = psSHP->sHooks.FOpen(pszFullname, pszAccess );
}
if( psSHP->fpSHX == NULL )
@@ -511,7 +540,7 @@
"Unable to open %s.shx or %s.SHX.",
pszBasename, pszBasename );
#endif
- fclose( psSHP->fpSHP );
+ psSHP->sHooks.FClose( psSHP->fpSHP );
free( psSHP );
free( pszBasename );
free( pszFullname );
@@ -525,7 +554,7 @@
/* Read the file size from the SHP file. */
/* -------------------------------------------------------------------- */
pabyBuf = (uchar *) malloc(100);
- fread( pabyBuf, 100, 1, psSHP->fpSHP );
+ psSHP->sHooks.FRead( pabyBuf, 100, 1, psSHP->fpSHP );
psSHP->nFileSize = (pabyBuf[24] * 256 * 256 * 256
+ pabyBuf[25] * 256 * 256
@@ -535,18 +564,15 @@
/* -------------------------------------------------------------------- */
/* Read SHX file Header info */
/* -------------------------------------------------------------------- */
- if( fread( pabyBuf, 100, 1, psSHP->fpSHX ) != 1
+ if( psSHP->sHooks.FRead( pabyBuf, 100, 1, psSHP->fpSHX ) != 1
|| pabyBuf[0] != 0
|| pabyBuf[1] != 0
|| pabyBuf[2] != 0x27
|| (pabyBuf[3] != 0x0a && pabyBuf[3] != 0x0d) )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_AppDefined,
- ".shx file is unreadable, or corrupt." );
-#endif
- fclose( psSHP->fpSHP );
- fclose( psSHP->fpSHX );
+ psSHP->sHooks.Error( ".shx file is unreadable, or corrupt." );
+ psSHP->sHooks.FClose( psSHP->fpSHP );
+ psSHP->sHooks.FClose( psSHP->fpSHX );
free( psSHP );
return( NULL );
@@ -560,15 +586,17 @@
if( psSHP->nRecords < 0 || psSHP->nRecords > 256000000 )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_AppDefined,
- "Record count in .shp header is %d, which seems\n"
- "unreasonable. Assuming header is corrupt.",
+ char szError[200];
+
+ sprintf( szError,
+ "Record count in .shp header is %d, which seems\n"
+ "unreasonable. Assuming header is corrupt.",
psSHP->nRecords );
-#endif
- fclose( psSHP->fpSHP );
- fclose( psSHP->fpSHX );
+ psSHP->sHooks.Error( szError );
+ psSHP->sHooks.FClose( psSHP->fpSHP );
+ psSHP->sHooks.FClose( psSHP->fpSHX );
free( psSHP );
+ free(pabyBuf);
return( NULL );
}
@@ -620,24 +648,55 @@
(int *) malloc(sizeof(int) * MAX(1,psSHP->nMaxRecords) );
psSHP->panRecSize =
(int *) malloc(sizeof(int) * MAX(1,psSHP->nMaxRecords) );
+ pabyBuf = (uchar *) malloc(8 * MAX(1,psSHP->nRecords) );
- pabyBuf = (uchar *) malloc(8 * MAX(1,psSHP->nRecords) );
- if( fread( pabyBuf, 8, psSHP->nRecords, psSHP->fpSHX ) != psSHP->nRecords )
+ if (psSHP->panRecOffset == NULL ||
+ psSHP->panRecSize == NULL ||
+ pabyBuf == NULL)
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_AppDefined,
- "Failed to read all values for %d records in .shx file.",
- psSHP->nRecords );
-#endif
+ char szError[200];
+
+ sprintf(szError,
+ "Not enough memory to allocate requested memory (nRecords=%d).\n"
+ "Probably broken SHP file",
+ psSHP->nRecords );
+ psSHP->sHooks.Error( szError );
+ psSHP->sHooks.FClose( psSHP->fpSHP );
+ psSHP->sHooks.FClose( psSHP->fpSHX );
+ if (psSHP->panRecOffset) free( psSHP->panRecOffset );
+ if (psSHP->panRecSize) free( psSHP->panRecSize );
+ if (pabyBuf) free( pabyBuf );
+ free( psSHP );
+ return( NULL );
+ }
+
+ if( (int) psSHP->sHooks.FRead( pabyBuf, 8, psSHP->nRecords, psSHP->fpSHX )
+ != psSHP->nRecords )
+ {
+ char szError[200];
+
+ sprintf( szError,
+ "Failed to read all values for %d records in .shx file.",
+ psSHP->nRecords );
+ psSHP->sHooks.Error( szError );
+
/* SHX is short or unreadable for some reason. */
- fclose( psSHP->fpSHP );
- fclose( psSHP->fpSHX );
+ psSHP->sHooks.FClose( psSHP->fpSHP );
+ psSHP->sHooks.FClose( psSHP->fpSHX );
free( psSHP->panRecOffset );
free( psSHP->panRecSize );
+ free( pabyBuf );
free( psSHP );
return( NULL );
}
+
+ /* In read-only mode, we can close the SHX now */
+ if (strcmp(pszAccess, "rb") == 0)
+ {
+ psSHP->sHooks.FClose( psSHP->fpSHX );
+ psSHP->fpSHX = NULL;
+ }
for( i = 0; i < psSHP->nRecords; i++ )
{
@@ -682,8 +741,9 @@
free( psSHP->panRecOffset );
free( psSHP->panRecSize );
- fclose( psSHP->fpSHX );
- fclose( psSHP->fpSHP );
+ if ( psSHP->fpSHX != NULL)
+ psSHP->sHooks.FClose( psSHP->fpSHX );
+ psSHP->sHooks.FClose( psSHP->fpSHP );
if( psSHP->pabyRec != NULL )
{
@@ -735,9 +795,27 @@
SHPCreate( const char * pszLayer, int nShapeType )
{
+ SAHooks sHooks;
+
+ SASetupDefaultHooks( &sHooks );
+
+ return SHPCreateLL( pszLayer, nShapeType, &sHooks );
+}
+
+/************************************************************************/
+/*                            SHPCreateLL()                             */
+/* */
+/* Create a new shape file and return a handle to the open */
+/* shape file with read/write access. */
+/************************************************************************/
+
+SHPHandle SHPAPI_CALL
+SHPCreateLL( const char * pszLayer, int nShapeType, SAHooks *psHooks )
+
+{
char *pszBasename, *pszFullname;
int i;
- FILE *fpSHP, *fpSHX;
+ SAFile fpSHP, fpSHX;
uchar abyHeader[100];
int32 i32;
double dValue;
@@ -770,26 +848,18 @@
/* -------------------------------------------------------------------- */
pszFullname = (char *) malloc(strlen(pszBasename) + 5);
sprintf( pszFullname, "%s.shp", pszBasename );
- fpSHP = fopen(pszFullname, "wb" );
+ fpSHP = psHooks->FOpen(pszFullname, "wb" );
if( fpSHP == NULL )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_AppDefined,
- "Failed to create file %s.",
- pszFullname );
-#endif
+        psHooks->Error( "Failed to create .shp file." );
return( NULL );
}
sprintf( pszFullname, "%s.shx", pszBasename );
- fpSHX = fopen(pszFullname, "wb" );
+ fpSHX = psHooks->FOpen(pszFullname, "wb" );
if( fpSHX == NULL )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_AppDefined,
- "Failed to create file %s.",
- pszFullname );
-#endif
+        psHooks->Error( "Failed to create .shx file." );
return( NULL );
}
@@ -826,12 +896,9 @@
/* -------------------------------------------------------------------- */
/* Write .shp file header. */
/* -------------------------------------------------------------------- */
- if( fwrite( abyHeader, 100, 1, fpSHP ) != 1 )
+ if( psHooks->FWrite( abyHeader, 100, 1, fpSHP ) != 1 )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_AppDefined,
- "Failed to write .shp header." );
-#endif
+ psHooks->Error( "Failed to write .shp header." );
return NULL;
}
@@ -842,22 +909,19 @@
ByteCopy( &i32, abyHeader+24, 4 );
if( !bBigEndian ) SwapWord( 4, abyHeader+24 );
- if( fwrite( abyHeader, 100, 1, fpSHX ) != 1 )
+ if( psHooks->FWrite( abyHeader, 100, 1, fpSHX ) != 1 )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_AppDefined,
- "Failed to write .shx header." );
-#endif
+ psHooks->Error( "Failed to write .shx header." );
return NULL;
}
/* -------------------------------------------------------------------- */
/* Close the files, and then open them as regular existing files. */
/* -------------------------------------------------------------------- */
- fclose( fpSHP );
- fclose( fpSHX );
+ psHooks->FClose( fpSHP );
+ psHooks->FClose( fpSHX );
- return( SHPOpen( pszLayer, "r+b" ) );
+ return( SHPOpenLL( pszLayer, "r+b", psHooks ) );
}
/************************************************************************/
@@ -931,9 +995,9 @@
SHPObject SHPAPI_CALL1(*)
SHPCreateObject( int nSHPType, int nShapeId, int nParts,
- int * panPartStart, int * panPartType,
- int nVertices, double * padfX, double * padfY,
- double * padfZ, double * padfM )
+ const int * panPartStart, const int * panPartType,
+ int nVertices, const double *padfX, const double *padfY,
+ const double * padfZ, const double * padfM )
{
SHPObject *psObject;
@@ -942,6 +1006,7 @@
psObject = (SHPObject *) calloc(1,sizeof(SHPObject));
psObject->nSHPType = nSHPType;
psObject->nShapeId = nShapeId;
+ psObject->bMeasureIsUsed = FALSE;
/* -------------------------------------------------------------------- */
/* Establish whether this shape type has M, and Z values. */
@@ -991,11 +1056,15 @@
for( i = 0; i < nParts; i++ )
{
psObject->panPartStart[i] = panPartStart[i];
+
if( panPartType != NULL )
psObject->panPartType[i] = panPartType[i];
else
psObject->panPartType[i] = SHPP_RING;
}
+
+ if( psObject->panPartStart[0] != 0 )
+ psObject->panPartStart[0] = 0;
}
/* -------------------------------------------------------------------- */
@@ -1021,6 +1090,8 @@
if( padfM != NULL && bHasM )
psObject->padfM[i] = padfM[i];
}
+ if( padfM != NULL && bHasM )
+ psObject->bMeasureIsUsed = TRUE;
}
/* -------------------------------------------------------------------- */
@@ -1041,8 +1112,8 @@
SHPObject SHPAPI_CALL1(*)
SHPCreateSimpleObject( int nSHPType, int nVertices,
- double * padfX, double * padfY,
- double * padfZ )
+ const double * padfX, const double * padfY,
+ const double * padfZ )
{
return( SHPCreateObject( nSHPType, -1, 0, NULL, NULL,
@@ -1198,13 +1269,14 @@
/*
* Write the M values, if any.
*/
- if( psObject->nSHPType == SHPT_POLYGONM
+ if( psObject->bMeasureIsUsed
+ && (psObject->nSHPType == SHPT_POLYGONM
|| psObject->nSHPType == SHPT_ARCM
#ifndef DISABLE_MULTIPATCH_MEASURE
|| psObject->nSHPType == SHPT_MULTIPATCH
#endif
|| psObject->nSHPType == SHPT_POLYGONZ
- || psObject->nSHPType == SHPT_ARCZ )
+ || psObject->nSHPType == SHPT_ARCZ) )
{
ByteCopy( &(psObject->dfMMin), pabyRec + nRecordSize, 8 );
if( bBigEndian ) SwapWord( 8, pabyRec + nRecordSize );
@@ -1269,8 +1341,9 @@
}
}
- if( psObject->nSHPType == SHPT_MULTIPOINTZ
- || psObject->nSHPType == SHPT_MULTIPOINTM )
+ if( psObject->bMeasureIsUsed
+ && (psObject->nSHPType == SHPT_MULTIPOINTZ
+ || psObject->nSHPType == SHPT_MULTIPOINTM) )
{
ByteCopy( &(psObject->dfMMin), pabyRec + nRecordSize, 8 );
if( bBigEndian ) SwapWord( 8, pabyRec + nRecordSize );
@@ -1311,8 +1384,9 @@
nRecordSize += 8;
}
- if( psObject->nSHPType == SHPT_POINTZ
- || psObject->nSHPType == SHPT_POINTM )
+ if( psObject->bMeasureIsUsed
+ && (psObject->nSHPType == SHPT_POINTZ
+ || psObject->nSHPType == SHPT_POINTM) )
{
ByteCopy( psObject->padfM, pabyRec + nRecordSize, 8 );
if( bBigEndian ) SwapWord( 8, pabyRec + nRecordSize );
@@ -1351,6 +1425,7 @@
else
{
nRecordOffset = psSHP->panRecOffset[nShapeId];
+ psSHP->panRecSize[nShapeId] = nRecordSize-8;
}
/* -------------------------------------------------------------------- */
@@ -1371,13 +1446,10 @@
/* -------------------------------------------------------------------- */
/* Write out record. */
/* -------------------------------------------------------------------- */
- if( fseek( psSHP->fpSHP, nRecordOffset, 0 ) != 0
- || fwrite( pabyRec, nRecordSize, 1, psSHP->fpSHP ) < 1 )
+ if( psSHP->sHooks.FSeek( psSHP->fpSHP, nRecordOffset, 0 ) != 0
+ || psSHP->sHooks.FWrite( pabyRec, nRecordSize, 1, psSHP->fpSHP ) < 1 )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_FileIO,
- "Error in fseek() or fwrite() writing object to .shp file." );
-#endif
+        psSHP->sHooks.Error( "Error in FSeek() or FWrite() writing object to .shp file." );
free( pabyRec );
return -1;
}
@@ -1434,7 +1506,9 @@
SHPReadObject( SHPHandle psSHP, int hEntity )
{
+ int nEntitySize, nRequiredSize;
SHPObject *psShape;
+ char pszErrorMsg[128];
/* -------------------------------------------------------------------- */
/* Validate the record/entity number. */
@@ -1445,23 +1519,47 @@
/* -------------------------------------------------------------------- */
/* Ensure our record buffer is large enough. */
/* -------------------------------------------------------------------- */
- if( psSHP->panRecSize[hEntity]+8 > psSHP->nBufSize )
+ nEntitySize = psSHP->panRecSize[hEntity]+8;
+ if( nEntitySize > psSHP->nBufSize )
{
- psSHP->nBufSize = psSHP->panRecSize[hEntity]+8;
- psSHP->pabyRec = (uchar *) SfRealloc(psSHP->pabyRec,psSHP->nBufSize);
+ psSHP->pabyRec = (uchar *) SfRealloc(psSHP->pabyRec,nEntitySize);
+ if (psSHP->pabyRec == NULL)
+ {
+ char szError[200];
+
+            /* Reallocate previous successful size for following features */
+ psSHP->pabyRec = malloc(psSHP->nBufSize);
+
+ sprintf( szError,
+ "Not enough memory to allocate requested memory (nBufSize=%d). "
+ "Probably broken SHP file", psSHP->nBufSize );
+ psSHP->sHooks.Error( szError );
+ return NULL;
+ }
+
+        /* Only set new buffer size after successful alloc */
+ psSHP->nBufSize = nEntitySize;
}
+ /* In case we were not able to reallocate the buffer on a previous step */
+ if (psSHP->pabyRec == NULL)
+ {
+ return NULL;
+ }
+
/* -------------------------------------------------------------------- */
/* Read the record. */
/* -------------------------------------------------------------------- */
- if( fseek( psSHP->fpSHP, psSHP->panRecOffset[hEntity], 0 ) != 0
- || fread( psSHP->pabyRec, psSHP->panRecSize[hEntity]+8, 1,
+ if( psSHP->sHooks.FSeek( psSHP->fpSHP, psSHP->panRecOffset[hEntity], 0 ) != 0
+ || psSHP->sHooks.FRead( psSHP->pabyRec, nEntitySize, 1,
psSHP->fpSHP ) != 1 )
{
-#ifdef USE_CPL
- CPLError( CE_Failure, CPLE_FileIO,
- "Error in fseek() or fread() reading object from .shp file." );
-#endif
+ /*
+ * TODO - mloskot: Consider detailed diagnostics of shape file,
+ * for example to detect if file is truncated.
+ */
+
+ psSHP->sHooks.Error( "Error in fseek() or fread() reading object from .shp file." );
return NULL;
}
@@ -1470,8 +1568,18 @@
/* -------------------------------------------------------------------- */
psShape = (SHPObject *) calloc(1,sizeof(SHPObject));
psShape->nShapeId = hEntity;
+ psShape->bMeasureIsUsed = FALSE;
+ if ( 8 + 4 > nEntitySize )
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d : nEntitySize = %d",
+ hEntity, nEntitySize);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
memcpy( &psShape->nSHPType, psSHP->pabyRec + 8, 4 );
+
if( bBigEndian ) SwapWord( 4, &(psShape->nSHPType) );
/* ==================================================================== */
@@ -1487,6 +1595,14 @@
int32 nPoints, nParts;
int i, nOffset;
+ if ( 40 + 8 + 4 > nEntitySize )
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d : nEntitySize = %d",
+ hEntity, nEntitySize);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
/* -------------------------------------------------------------------- */
/* Get the X/Y bounds. */
/* -------------------------------------------------------------------- */
@@ -1510,6 +1626,39 @@
if( bBigEndian ) SwapWord( 4, &nPoints );
if( bBigEndian ) SwapWord( 4, &nParts );
+ if (nPoints < 0 || nParts < 0 ||
+ nPoints > 50 * 1000 * 1000 || nParts > 10 * 1000 * 1000)
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d, nPoints=%d, nParts=%d.",
+ hEntity, nPoints, nParts);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
+
+ /* With the previous checks on nPoints and nParts, */
+ /* we should not overflow here and after */
+ /* since 50 M * (16 + 8 + 8) = 1 600 MB */
+ nRequiredSize = 44 + 8 + 4 * nParts + 16 * nPoints;
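+    /* e.g. a two-part polygon with 10 vertices needs at least
+     * 44 + 8 + 4*2 + 16*10 = 220 bytes of record content */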
+ if ( psShape->nSHPType == SHPT_POLYGONZ
+ || psShape->nSHPType == SHPT_ARCZ
+ || psShape->nSHPType == SHPT_MULTIPATCH )
+ {
+ nRequiredSize += 16 + 8 * nPoints;
+ }
+ if( psShape->nSHPType == SHPT_MULTIPATCH )
+ {
+ nRequiredSize += 4 * nParts;
+ }
+ if (nRequiredSize > nEntitySize)
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d, nPoints=%d, nParts=%d, nEntitySize=%d.",
+ hEntity, nPoints, nParts, nEntitySize);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
+
psShape->nVertices = nPoints;
psShape->padfX = (double *) calloc(nPoints,sizeof(double));
psShape->padfY = (double *) calloc(nPoints,sizeof(double));
@@ -1519,6 +1668,21 @@
psShape->nParts = nParts;
psShape->panPartStart = (int *) calloc(nParts,sizeof(int));
psShape->panPartType = (int *) calloc(nParts,sizeof(int));
+
+ if (psShape->padfX == NULL ||
+ psShape->padfY == NULL ||
+ psShape->padfZ == NULL ||
+ psShape->padfM == NULL ||
+ psShape->panPartStart == NULL ||
+ psShape->panPartType == NULL)
+ {
+ snprintf(pszErrorMsg, 128,
+ "Not enough memory to allocate requested memory (nPoints=%d, nParts=%d) for shape %d. "
+                 "Probably broken SHP file", nPoints, nParts, hEntity );
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
for( i = 0; i < nParts; i++ )
psShape->panPartType[i] = SHPP_RING;
@@ -1530,6 +1694,25 @@
for( i = 0; i < nParts; i++ )
{
if( bBigEndian ) SwapWord( 4, psShape->panPartStart+i );
+
+ /* We check that the offset is inside the vertex array */
+ if (psShape->panPartStart[i] < 0 ||
+ psShape->panPartStart[i] >= psShape->nVertices)
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d : panPartStart[%d] = %d, nVertices = %d",
+ hEntity, i, psShape->panPartStart[i], psShape->nVertices);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
+ if (i > 0 && psShape->panPartStart[i] <= psShape->panPartStart[i-1])
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d : panPartStart[%d] = %d, panPartStart[%d] = %d",
+ hEntity, i, psShape->panPartStart[i], i - 1, psShape->panPartStart[i - 1]);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
}
nOffset = 44 + 8 + 4*nParts;
@@ -1596,7 +1779,7 @@
/* big enough, but really it will only occur for the Z shapes */
/* (options), and the M shapes. */
/* -------------------------------------------------------------------- */
- if( psSHP->panRecSize[hEntity]+8 >= nOffset + 16 + 8*nPoints )
+ if( nEntitySize >= nOffset + 16 + 8*nPoints )
{
memcpy( &(psShape->dfMMin), psSHP->pabyRec + nOffset, 8 );
memcpy( &(psShape->dfMMax), psSHP->pabyRec + nOffset + 8, 8 );
@@ -1610,8 +1793,8 @@
psSHP->pabyRec + nOffset + 16 + i*8, 8 );
if( bBigEndian ) SwapWord( 8, psShape->padfM + i );
}
+ psShape->bMeasureIsUsed = TRUE;
}
-
}
/* ==================================================================== */
@@ -1624,15 +1807,60 @@
int32 nPoints;
int i, nOffset;
+ if ( 44 + 4 > nEntitySize )
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d : nEntitySize = %d",
+ hEntity, nEntitySize);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
memcpy( &nPoints, psSHP->pabyRec + 44, 4 );
+
if( bBigEndian ) SwapWord( 4, &nPoints );
+ if (nPoints < 0 || nPoints > 50 * 1000 * 1000)
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d : nPoints = %d",
+ hEntity, nPoints);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
+
+ nRequiredSize = 48 + nPoints * 16;
+ if( psShape->nSHPType == SHPT_MULTIPOINTZ )
+ {
+ nRequiredSize += 16 + nPoints * 8;
+ }
+ if (nRequiredSize > nEntitySize)
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d : nPoints = %d, nEntitySize = %d",
+ hEntity, nPoints, nEntitySize);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
+
psShape->nVertices = nPoints;
psShape->padfX = (double *) calloc(nPoints,sizeof(double));
psShape->padfY = (double *) calloc(nPoints,sizeof(double));
psShape->padfZ = (double *) calloc(nPoints,sizeof(double));
psShape->padfM = (double *) calloc(nPoints,sizeof(double));
+ if (psShape->padfX == NULL ||
+ psShape->padfY == NULL ||
+ psShape->padfZ == NULL ||
+ psShape->padfM == NULL)
+ {
+ snprintf(pszErrorMsg, 128,
+ "Not enough memory to allocate requested memory (nPoints=%d) for shape %d. "
+                "Probably broken SHP file", nPoints, hEntity );
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
+
for( i = 0; i < nPoints; i++ )
{
memcpy(psShape->padfX+i, psSHP->pabyRec + 48 + 16 * i, 8 );
@@ -1684,7 +1912,7 @@
/* big enough, but really it will only occur for the Z shapes */
/* (options), and the M shapes. */
/* -------------------------------------------------------------------- */
- if( psSHP->panRecSize[hEntity]+8 >= nOffset + 16 + 8*nPoints )
+ if( nEntitySize >= nOffset + 16 + 8*nPoints )
{
memcpy( &(psShape->dfMMin), psSHP->pabyRec + nOffset, 8 );
memcpy( &(psShape->dfMMax), psSHP->pabyRec + nOffset + 8, 8 );
@@ -1698,6 +1926,7 @@
psSHP->pabyRec + nOffset + 16 + i*8, 8 );
if( bBigEndian ) SwapWord( 8, psShape->padfM + i );
}
+ psShape->bMeasureIsUsed = TRUE;
}
}
@@ -1716,6 +1945,14 @@
psShape->padfZ = (double *) calloc(1,sizeof(double));
psShape->padfM = (double *) calloc(1,sizeof(double));
+ if (20 + 8 + (( psShape->nSHPType == SHPT_POINTZ ) ? 8 : 0)> nEntitySize)
+ {
+ snprintf(pszErrorMsg, 128, "Corrupted .shp file : shape %d : nEntitySize = %d",
+ hEntity, nEntitySize);
+ psSHP->sHooks.Error( pszErrorMsg );
+ SHPDestroyObject(psShape);
+ return NULL;
+ }
memcpy( psShape->padfX, psSHP->pabyRec + 12, 8 );
memcpy( psShape->padfY, psSHP->pabyRec + 20, 8 );
@@ -1742,11 +1979,12 @@
/* big enough, but really it will only occur for the Z shapes */
/* (options), and the M shapes. */
/* -------------------------------------------------------------------- */
- if( psSHP->panRecSize[hEntity]+8 >= nOffset + 8 )
+ if( nEntitySize >= nOffset + 8 )
{
memcpy( psShape->padfM, psSHP->pabyRec + nOffset, 8 );
if( bBigEndian ) SwapWord( 8, psShape->padfM );
+ psShape->bMeasureIsUsed = TRUE;
}
/* -------------------------------------------------------------------- */
@@ -1918,10 +2156,17 @@
/* first ring is outer and all others are inner, but eventually */
/* we need to fix this to handle multiple island polygons and */
/* unordered sets of rings. */
+/* */
/* -------------------------------------------------------------------- */
- dfTestX = psObject->padfX[psObject->panPartStart[iOpRing]];
- dfTestY = psObject->padfY[psObject->panPartStart[iOpRing]];
+    /* Use a point in the middle of the segment to avoid testing
+ * common points of rings.
+ */
+ dfTestX = ( psObject->padfX[psObject->panPartStart[iOpRing]]
+ + psObject->padfX[psObject->panPartStart[iOpRing] + 1] ) / 2;
+ dfTestY = ( psObject->padfY[psObject->panPartStart[iOpRing]]
+ + psObject->padfY[psObject->panPartStart[iOpRing] + 1] ) / 2;
+
bInner = FALSE;
for( iCheckRing = 0; iCheckRing < psObject->nParts; iCheckRing++ )
{
@@ -1948,21 +2193,31 @@
else
iNext = 0;
- if( (psObject->padfY[iEdge+nVertStart] < dfTestY
- && psObject->padfY[iNext+nVertStart] >= dfTestY)
- || (psObject->padfY[iNext+nVertStart] < dfTestY
- && psObject->padfY[iEdge+nVertStart] >= dfTestY) )
+ /* Rule #1:
+             * Test whether the edge 'straddles' the horizontal ray from the test point (dfTestX, dfTestY).
+             * Rule #1 also excludes edges collinear with the ray.
+ */
+ if ( ( psObject->padfY[iEdge+nVertStart] < dfTestY
+ && dfTestY <= psObject->padfY[iNext+nVertStart] )
+ || ( psObject->padfY[iNext+nVertStart] < dfTestY
+ && dfTestY <= psObject->padfY[iEdge+nVertStart] ) )
{
- if( psObject->padfX[iEdge+nVertStart]
- + (dfTestY - psObject->padfY[iEdge+nVertStart])
- / (psObject->padfY[iNext+nVertStart]
- - psObject->padfY[iEdge+nVertStart])
- * (psObject->padfX[iNext+nVertStart]
- - psObject->padfX[iEdge+nVertStart]) < dfTestX )
+ /* Rule #2:
+             * Test whether the edge-ray intersection lies to the left of the test point (dfTestX, dfTestY).
+ */
+ double const intersect =
+ ( psObject->padfX[iEdge+nVertStart]
+ + ( dfTestY - psObject->padfY[iEdge+nVertStart] )
+ / ( psObject->padfY[iNext+nVertStart] - psObject->padfY[iEdge+nVertStart] )
+ * ( psObject->padfX[iNext+nVertStart] - psObject->padfX[iEdge+nVertStart] ) );
+
+ if (intersect < dfTestX)
+ {
bInner = !bInner;
- }
+ }
+ }
}
- }
+ } /* for iCheckRing */
/* -------------------------------------------------------------------- */
/* Determine the current order of this ring so we will know if */
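The rewritten ring test above probes the midpoint of a ring's first segment and then applies the classic even-odd ray-crossing rule spelled out in the Rule #1 / Rule #2 comments. Lifted out of its surrounding loop, the test amounts to the following standalone sketch (function and parameter names are illustrative, not taken from the patch):

    /* Even-odd (ray crossing) point-in-ring test: count the edges whose
     * crossing with the horizontal line y == dfTestY lies to the left of
     * the test point; an odd count means the point is inside the ring. */
    static int point_in_ring( double dfTestX, double dfTestY,
                              const double *padfX, const double *padfY,
                              int nVertices )
    {
        int i, bInner = 0;

        for( i = 0; i < nVertices; i++ )
        {
            int iNext = (i == nVertices - 1) ? 0 : i + 1;

            /* Rule #1: the edge must straddle the ray's y level,
             * which also rules out edges collinear with the ray. */
            if( (padfY[i] < dfTestY && dfTestY <= padfY[iNext])
                || (padfY[iNext] < dfTestY && dfTestY <= padfY[i]) )
            {
                /* Rule #2: only crossings left of the test point count. */
                double dfIntersect = padfX[i]
                    + (dfTestY - padfY[i])
                      / (padfY[iNext] - padfY[i])
                      * (padfX[iNext] - padfX[i]);

                if( dfIntersect < dfTestX )
                    bInner = !bInner;
            }
        }

        return bInner;
    }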
Modified: trunk/thuban/libraries/shapelib/shptree.c
===================================================================
--- trunk/thuban/libraries/shapelib/shptree.c 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/libraries/shapelib/shptree.c 2009-09-27 20:36:21 UTC (rev 2889)
@@ -33,10 +33,16 @@
* DEALINGS IN THE SOFTWARE.
******************************************************************************
*
- * $Log$
- * Revision 1.2 2003/10/02 15:15:16 bh
- * Update to shapelib 1.2.10
+ * $Log: shptree.c,v $
+ * Revision 1.12 2008-11-12 15:39:50 fwarmerdam
+ * improve safety in face of buggy .shp file.
*
+ * Revision 1.11 2007/10/27 03:31:14 fwarmerdam
+ * limit default depth of tree to 12 levels (gdal ticket #1594)
+ *
+ * Revision 1.10 2005/01/03 22:30:13 fwarmerdam
+ * added support for saved quadtrees
+ *
* Revision 1.9 2003/01/28 15:53:41 warmerda
* Avoid build warnings.
*
@@ -66,22 +72,26 @@
*
*/
-static char rcsid[] =
- "$Id$";
-
#include "shapefil.h"
#include <math.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
+#ifdef USE_CPL
+#include <cpl_error.h>
+#endif
+SHP_CVSID("$Id$")
+
#ifndef TRUE
# define TRUE 1
# define FALSE 0
#endif
+static int bBigEndian = 0;
+
/* -------------------------------------------------------------------- */
/* If the following is 0.5, nodes will be split in half. If it */
/* is 0.6 then each subnode will contain 60% of the parent */
@@ -121,6 +131,13 @@
SHPTreeNode *psTreeNode;
psTreeNode = (SHPTreeNode *) malloc(sizeof(SHPTreeNode));
+ if( NULL == psTreeNode )
+ {
+#ifdef USE_CPL
+ CPLError( CE_Fatal, CPLE_OutOfMemory, "Memory allocation failure");
+#endif
+ return NULL;
+ }
psTreeNode->nShapeCount = 0;
psTreeNode->panShapeIds = NULL;
@@ -156,10 +173,18 @@
/* Allocate the tree object */
/* -------------------------------------------------------------------- */
psTree = (SHPTree *) malloc(sizeof(SHPTree));
+ if( NULL == psTree )
+ {
+#ifdef USE_CPL
+ CPLError( CE_Fatal, CPLE_OutOfMemory, "Memory allocation failure");
+#endif
+ return NULL;
+ }
psTree->hSHP = hSHP;
psTree->nMaxDepth = nMaxDepth;
psTree->nDimension = nDimension;
+ psTree->nTotalCount = 0;
/* -------------------------------------------------------------------- */
/* If no max depth was defined, try to select a reasonable one */
@@ -176,19 +201,47 @@
psTree->nMaxDepth += 1;
nMaxNodeCount = nMaxNodeCount * 2;
}
+
+#ifdef USE_CPL
+ CPLDebug( "Shape",
+ "Estimated spatial index tree depth: %d",
+ psTree->nMaxDepth );
+#endif
+
+ /* NOTE: Due to problems with memory allocation for deep trees,
+ * automatically estimated depth is limited up to 12 levels.
+ * See Ticket #1594 for detailed discussion.
+ */
+ if( psTree->nMaxDepth > MAX_DEFAULT_TREE_DEPTH )
+ {
+ psTree->nMaxDepth = MAX_DEFAULT_TREE_DEPTH;
+
+#ifdef USE_CPL
+ CPLDebug( "Shape",
+ "Falling back to max number of allowed index tree levels (%d).",
+ MAX_DEFAULT_TREE_DEPTH );
+#endif
+ }
}
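The cap only trims the result of the automatic estimate; the estimating loop itself starts above this hunk, and its condition is assumed below to be shapelib's usual nMaxNodeCount * 4 < nShapeCount test. Reduced to a standalone sketch:

    /* Sketch of the depth estimate plus the new cap; the loop condition is
     * assumed from the surrounding shapelib code, and the 12-level limit
     * comes from the comment above (ticket #1594). */
    #define MAX_DEFAULT_TREE_DEPTH 12

    static int estimate_tree_depth( int nShapeCount )
    {
        int nMaxDepth = 0;
        int nMaxNodeCount = 1;

        while( nMaxNodeCount * 4 < nShapeCount )
        {
            nMaxDepth += 1;
            nMaxNodeCount = nMaxNodeCount * 2;
        }

        if( nMaxDepth > MAX_DEFAULT_TREE_DEPTH )
            nMaxDepth = MAX_DEFAULT_TREE_DEPTH;

        return nMaxDepth;
    }

A shapefile with one million shapes, for instance, would otherwise be estimated at depth 18 and is now clamped to 12.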
/* -------------------------------------------------------------------- */
/* Allocate the root node. */
/* -------------------------------------------------------------------- */
psTree->psRoot = SHPTreeNodeCreate( padfBoundsMin, padfBoundsMax );
+ if( NULL == psTree->psRoot )
+ {
+ return NULL;
+ }
/* -------------------------------------------------------------------- */
/* Assign the bounds to the root node. If none are passed in, */
/* use the bounds of the provided file otherwise the create */
/* function will have already set the bounds. */
/* -------------------------------------------------------------------- */
- if( padfBoundsMin == NULL )
+ assert( NULL != psTree );
+ assert( NULL != psTree->psRoot );
+
+ if( padfBoundsMin == NULL )
{
SHPGetInfo( hSHP, NULL, NULL,
psTree->psRoot->adfBoundsMin,
@@ -209,8 +262,11 @@
SHPObject *psShape;
psShape = SHPReadObject( hSHP, iShape );
- SHPTreeAddShapeId( psTree, psShape );
- SHPDestroyObject( psShape );
+ if( psShape != NULL )
+ {
+ SHPTreeAddShapeId( psTree, psShape );
+ SHPDestroyObject( psShape );
+ }
}
}
@@ -226,6 +282,8 @@
{
int i;
+ assert( NULL != psTreeNode );
+
for( i = 0; i < psTreeNode->nSubNodes; i++ )
{
if( psTreeNode->apsSubNode[i] != NULL )
@@ -497,14 +555,14 @@
/* -------------------------------------------------------------------- */
psTreeNode->nShapeCount++;
- psTreeNode->panShapeIds =
+ psTreeNode->panShapeIds = (int *)
SfRealloc( psTreeNode->panShapeIds,
sizeof(int) * psTreeNode->nShapeCount );
psTreeNode->panShapeIds[psTreeNode->nShapeCount-1] = psObject->nShapeId;
if( psTreeNode->papsShapeObj != NULL )
{
- psTreeNode->papsShapeObj =
+ psTreeNode->papsShapeObj = (SHPObject **)
SfRealloc( psTreeNode->papsShapeObj,
sizeof(void *) * psTreeNode->nShapeCount );
psTreeNode->papsShapeObj[psTreeNode->nShapeCount-1] = NULL;
@@ -524,6 +582,8 @@
SHPTreeAddShapeId( SHPTree * psTree, SHPObject * psObject )
{
+ psTree->nTotalCount++;
+
return( SHPTreeNodeAddShapeId( psTree->psRoot, psObject,
psTree->nMaxDepth, psTree->nDimension ) );
}
@@ -680,3 +740,306 @@
SHPTreeNodeTrim( hTree->psRoot );
}
+/************************************************************************/
+/* SwapWord() */
+/* */
+/* Swap a 2, 4 or 8 byte word. */
+/************************************************************************/
+
+static void SwapWord( int length, void * wordP )
+
+{
+ int i;
+ unsigned char temp;
+
+ for( i=0; i < length/2; i++ )
+ {
+ temp = ((unsigned char *) wordP)[i];
+ ((unsigned char *)wordP)[i] = ((unsigned char *) wordP)[length-i-1];
+ ((unsigned char *) wordP)[length-i-1] = temp;
+ }
+}
+
+/************************************************************************/
+/* SHPSearchDiskTreeNode() */
+/************************************************************************/
+
+static int
+SHPSearchDiskTreeNode( FILE *fp, double *padfBoundsMin, double *padfBoundsMax,
+ int **ppanResultBuffer, int *pnBufferMax,
+ int *pnResultCount, int bNeedSwap )
+
+{
+ int i;
+ int offset;
+ int numshapes, numsubnodes;
+ double adfNodeBoundsMin[2], adfNodeBoundsMax[2];
+
+/* -------------------------------------------------------------------- */
+/* Read and unswap first part of node info. */
+/* -------------------------------------------------------------------- */
+ fread( &offset, 4, 1, fp );
+ if ( bNeedSwap ) SwapWord ( 4, &offset );
+
+ fread( adfNodeBoundsMin, sizeof(double), 2, fp );
+ fread( adfNodeBoundsMax, sizeof(double), 2, fp );
+ if ( bNeedSwap )
+ {
+ SwapWord( 8, adfNodeBoundsMin + 0 );
+ SwapWord( 8, adfNodeBoundsMin + 1 );
+ SwapWord( 8, adfNodeBoundsMax + 0 );
+ SwapWord( 8, adfNodeBoundsMax + 1 );
+ }
+
+ fread( &numshapes, 4, 1, fp );
+ if ( bNeedSwap ) SwapWord ( 4, &numshapes );
+
+/* -------------------------------------------------------------------- */
+/* If we don't overlap this node at all, we can just fseek() */
+/*      past this node info and all subnodes.                           */
+/* -------------------------------------------------------------------- */
+ if( !SHPCheckBoundsOverlap( adfNodeBoundsMin, adfNodeBoundsMax,
+ padfBoundsMin, padfBoundsMax, 2 ) )
+ {
+ offset += numshapes*sizeof(int) + sizeof(int);
+ fseek(fp, offset, SEEK_CUR);
+ return TRUE;
+ }
+
+/* -------------------------------------------------------------------- */
+/* Add all the shapeids at this node to our list. */
+/* -------------------------------------------------------------------- */
+ if(numshapes > 0)
+ {
+ if( *pnResultCount + numshapes > *pnBufferMax )
+ {
+ *pnBufferMax = (int) ((*pnResultCount + numshapes + 100) * 1.25);
+ *ppanResultBuffer = (int *)
+ SfRealloc( *ppanResultBuffer, *pnBufferMax * sizeof(int) );
+ }
+
+ fread( *ppanResultBuffer + *pnResultCount,
+ sizeof(int), numshapes, fp );
+
+ if (bNeedSwap )
+ {
+ for( i=0; i<numshapes; i++ )
+ SwapWord( 4, *ppanResultBuffer + *pnResultCount + i );
+ }
+
+ *pnResultCount += numshapes;
+ }
+
+/* -------------------------------------------------------------------- */
+/* Process the subnodes. */
+/* -------------------------------------------------------------------- */
+ fread( &numsubnodes, 4, 1, fp );
+ if ( bNeedSwap ) SwapWord ( 4, &numsubnodes );
+
+ for(i=0; i<numsubnodes; i++)
+ {
+ if( !SHPSearchDiskTreeNode( fp, padfBoundsMin, padfBoundsMax,
+ ppanResultBuffer, pnBufferMax,
+ pnResultCount, bNeedSwap ) )
+ return FALSE;
+ }
+
+ return TRUE;
+}
+
+/************************************************************************/
+/* SHPSearchDiskTree() */
+/************************************************************************/
+
+int SHPAPI_CALL1(*)
+SHPSearchDiskTree( FILE *fp,
+ double *padfBoundsMin, double *padfBoundsMax,
+ int *pnShapeCount )
+
+{
+ int i, bNeedSwap, nBufferMax = 0;
+ unsigned char abyBuf[16];
+ int *panResultBuffer = NULL;
+
+ *pnShapeCount = 0;
+
+/* -------------------------------------------------------------------- */
+/* Establish the byte order on this machine. */
+/* -------------------------------------------------------------------- */
+ i = 1;
+ if( *((unsigned char *) &i) == 1 )
+ bBigEndian = FALSE;
+ else
+ bBigEndian = TRUE;
+
+/* -------------------------------------------------------------------- */
+/* Read the header. */
+/* -------------------------------------------------------------------- */
+ fseek( fp, 0, SEEK_SET );
+ fread( abyBuf, 16, 1, fp );
+
+ if( memcmp( abyBuf, "SQT", 3 ) != 0 )
+ return NULL;
+
+ if( (abyBuf[3] == 2 && bBigEndian)
+ || (abyBuf[3] == 1 && !bBigEndian) )
+ bNeedSwap = FALSE;
+ else
+ bNeedSwap = TRUE;
+
+/* -------------------------------------------------------------------- */
+/*      Search through root node and its descendants.                   */
+/* -------------------------------------------------------------------- */
+ if( !SHPSearchDiskTreeNode( fp, padfBoundsMin, padfBoundsMax,
+ &panResultBuffer, &nBufferMax,
+ pnShapeCount, bNeedSwap ) )
+ {
+ if( panResultBuffer != NULL )
+ free( panResultBuffer );
+ *pnShapeCount = 0;
+ return NULL;
+ }
+/* -------------------------------------------------------------------- */
+/* Sort the id array */
+/* -------------------------------------------------------------------- */
+ qsort(panResultBuffer, *pnShapeCount, sizeof(int), compare_ints);
+
+ return panResultBuffer;
+}
+
+/************************************************************************/
+/* SHPGetSubNodeOffset() */
+/* */
+/* Determine how big all the subnodes of this node (and their */
+/* children) will be. This will allow disk based searchers to */
+/* seek past them all efficiently. */
+/************************************************************************/
+
+static int SHPGetSubNodeOffset( SHPTreeNode *node)
+{
+ int i;
+ long offset=0;
+
+ for(i=0; i<node->nSubNodes; i++ )
+ {
+ if(node->apsSubNode[i])
+ {
+ offset += 4*sizeof(double)
+ + (node->apsSubNode[i]->nShapeCount+3)*sizeof(int);
+ offset += SHPGetSubNodeOffset(node->apsSubNode[i]);
+ }
+ }
+
+ return(offset);
+}
+
+/************************************************************************/
+/* SHPWriteTreeNode() */
+/************************************************************************/
+
+static void SHPWriteTreeNode( FILE *fp, SHPTreeNode *node)
+{
+ int i,j;
+ int offset;
+ unsigned char *pabyRec = NULL;
+ assert( NULL != node );
+
+ offset = SHPGetSubNodeOffset(node);
+
+ pabyRec = (unsigned char *)
+ malloc(sizeof(double) * 4
+ + (3 * sizeof(int)) + (node->nShapeCount * sizeof(int)) );
+ if( NULL == pabyRec )
+ {
+#ifdef USE_CPL
+ CPLError( CE_Fatal, CPLE_OutOfMemory, "Memory allocation failure");
+#endif
+ assert( 0 );
+ }
+ assert( NULL != pabyRec );
+
+ memcpy( pabyRec, &offset, 4);
+
+ /* minx, miny, maxx, maxy */
+ memcpy( pabyRec+ 4, node->adfBoundsMin+0, sizeof(double) );
+ memcpy( pabyRec+12, node->adfBoundsMin+1, sizeof(double) );
+ memcpy( pabyRec+20, node->adfBoundsMax+0, sizeof(double) );
+ memcpy( pabyRec+28, node->adfBoundsMax+1, sizeof(double) );
+
+ memcpy( pabyRec+36, &node->nShapeCount, 4);
+ j = node->nShapeCount * sizeof(int);
+ memcpy( pabyRec+40, node->panShapeIds, j);
+ memcpy( pabyRec+j+40, &node->nSubNodes, 4);
+
+ fwrite( pabyRec, 44+j, 1, fp );
+ free (pabyRec);
+
+ for(i=0; i<node->nSubNodes; i++ )
+ {
+ if(node->apsSubNode[i])
+ SHPWriteTreeNode( fp, node->apsSubNode[i]);
+ }
+}
+
+/************************************************************************/
+/* SHPWriteTree() */
+/************************************************************************/
+
+int SHPWriteTree(SHPTree *tree, const char *filename )
+{
+ char signature[4] = "SQT";
+ int i;
+ char abyBuf[32];
+ FILE *fp;
+
+/* -------------------------------------------------------------------- */
+/* Open the output file. */
+/* -------------------------------------------------------------------- */
+ fp = fopen(filename, "wb");
+ if( fp == NULL )
+ {
+ return FALSE;
+ }
+
+/* -------------------------------------------------------------------- */
+/* Establish the byte order on this machine. */
+/* -------------------------------------------------------------------- */
+ i = 1;
+ if( *((unsigned char *) &i) == 1 )
+ bBigEndian = FALSE;
+ else
+ bBigEndian = TRUE;
+
+/* -------------------------------------------------------------------- */
+/* Write the header. */
+/* -------------------------------------------------------------------- */
+ memcpy( abyBuf+0, signature, 3 );
+
+ if( bBigEndian )
+ abyBuf[3] = 2; /* New MSB */
+ else
+ abyBuf[3] = 1; /* New LSB */
+
+ abyBuf[4] = 1; /* version */
+ abyBuf[5] = 0; /* next 3 reserved */
+ abyBuf[6] = 0;
+ abyBuf[7] = 0;
+
+ fwrite( abyBuf, 8, 1, fp );
+
+ fwrite( &(tree->nTotalCount), 4, 1, fp );
+
+ /* write maxdepth */
+
+ fwrite( &(tree->nMaxDepth), 4, 1, fp );
+
+/* -------------------------------------------------------------------- */
+/* Write all the nodes "in order". */
+/* -------------------------------------------------------------------- */
+
+ SHPWriteTreeNode( fp, tree->psRoot );
+
+ fclose( fp );
+
+ return TRUE;
+}
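Between them, SHPWriteTree() above and SHPSearchDiskTree() define the on-disk quadtree layout: a 16-byte header (the "SQT" signature, a byte-order flag of 1 for LSB or 2 for MSB, a version byte, three reserved bytes, the total shape count and the maximum depth) followed by the nodes written depth-first by SHPWriteTreeNode(). Purely as an illustration (the function name and error handling are not from the patch, and the two counts are left in the writer's byte order, so a cross-endian reader would still need SwapWord()):

    #include <stdio.h>
    #include <string.h>

    /* sketch: read back the 16-byte header emitted by SHPWriteTree() */
    static int read_quadtree_header( const char *pszFilename,
                                     int *pnShapeCount, int *pnMaxDepth )
    {
        unsigned char abyBuf[16];
        FILE *fp = fopen( pszFilename, "rb" );

        if( fp == NULL || fread( abyBuf, 16, 1, fp ) != 1 )
        {
            if( fp != NULL ) fclose( fp );
            return 0;
        }
        fclose( fp );

        if( memcmp( abyBuf, "SQT", 3 ) != 0 )       /* signature check    */
            return 0;

        /* abyBuf[3]: 1 = written LSB-first, 2 = MSB-first;               */
        /* abyBuf[4]: version, abyBuf[5..7]: reserved                      */
        memcpy( pnShapeCount, abyBuf + 8, 4 );      /* tree->nTotalCount  */
        memcpy( pnMaxDepth,   abyBuf + 12, 4 );     /* tree->nMaxDepth    */

        return 1;
    }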
Modified: trunk/thuban/setup.py
===================================================================
--- trunk/thuban/setup.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/setup.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -269,22 +269,47 @@
# shapelib wrappers are also distributed with thuban
#
-extensions.append(Extension("Lib.shapelibc",
- [ext_dir + "/pyshapelib/shapelib_wrap.c",
+def determine_shapelib_macros():
+ f = open(convert_path(shp_dir + "/shapefil.h"))
+ contents = f.read()
+ f.close()
+
+ def have(keyword):
+ if keyword in contents:
+ return "1"
+ return "0"
+
+ return [
+ ("HAVE_UPDATE_HEADER", have("DBFUpdateHeader")),
+ ("HAVE_CODE_PAGE", have("DBFGetCodePage")),
+ ("DISABLE_CVSID", "1")]
+
+def search_sahooks_files():
+ candidates = [shp_dir + "/safileio.c"]
+ return filter(os.path.exists, candidates)
+
+shapelib_macros = determine_shapelib_macros()
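+# With the shapefil.h bundled in this commit, which is expected to declare
+# both DBFUpdateHeader and DBFGetCodePage, shapelib_macros evaluates to
+# [("HAVE_UPDATE_HEADER", "1"), ("HAVE_CODE_PAGE", "1"), ("DISABLE_CVSID", "1")];
+# a keyword that is not found maps to "0" rather than an undefined macro.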
+sahooks_files = search_sahooks_files()
+
+extensions.append(Extension("Lib.shapelib",
+ [ext_dir + "/pyshapelib/shapelibmodule.c",
shp_dir + "/shpopen.c",
- shp_dir + "/shptree.c"],
- include_dirs = [shp_dir]))
+ shp_dir + "/shptree.c"]
+ + sahooks_files,
+ include_dirs = [shp_dir],
+ define_macros = shapelib_macros))
extensions.append(Extension("Lib.shptree",
[ext_dir + "/pyshapelib/shptreemodule.c"],
- include_dirs = [shp_dir]))
-extensions.append(Extension("Lib.dbflibc",
- [ext_dir + "/pyshapelib/dbflib_wrap.c",
- shp_dir + "/dbfopen.c"],
include_dirs = [shp_dir],
- define_macros = [("HAVE_UPDATE_HEADER", "1")]))
-for name in ("shapelib", "dbflib"):
- py_modules.append(ext_dir + "/pyshapelib/" + name)
+ define_macros = shapelib_macros))
+extensions.append(Extension("Lib.dbflib",
+ [ext_dir + "/pyshapelib/dbflibmodule.c",
+ shp_dir + "/dbfopen.c"]
+ + sahooks_files,
+ include_dirs = [shp_dir],
+ define_macros = shapelib_macros))
+
#
# PROJ4 bindings are also distributed with thuban
#
Modified: trunk/thuban/test/test_load.py
===================================================================
--- trunk/thuban/test/test_load.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/test/test_load.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -318,7 +318,7 @@
dbffile = self.temp_file_name("TestNonAsciiColumnName.dbf")
shpfile = self.temp_file_name("TestNonAsciiColumnName.shp")
dbf = dbflib.create(dbffile)
- dbf.add_field('Fl\xe4che', dbflib.FTDouble, 10, 5)
+ dbf.add_field(u'Fl\xe4che', dbflib.FTDouble, 10, 5)
dbf.write_record(0, (0.0,))
dbf.close()
shp = shapelib.create(shpfile, shapelib.SHPT_POLYGON)
@@ -348,7 +348,8 @@
# too
layer = session.Maps()[0].Layers()[0]
try:
- self.assertEquals(layer.GetClassificationColumn(), 'Fl\xe4che')
+ self.assertEquals(layer.GetClassificationColumn(),
+ internal_from_unicode(u'Fl\xe4che'))
except UnicodeError:
# FIXME: Obviously this will have to change if Thuban ever
# supports unicode properly.
Modified: trunk/thuban/test/test_load_1_0.py
===================================================================
--- trunk/thuban/test/test_load_1_0.py 2009-09-21 10:33:38 UTC (rev 2888)
+++ trunk/thuban/test/test_load_1_0.py 2009-09-27 20:36:21 UTC (rev 2889)
@@ -276,7 +276,7 @@
dbffile = self.temp_file_name("TestNonAsciiColumnName.dbf")
shpfile = self.temp_file_name("TestNonAsciiColumnName.shp")
dbf = dbflib.create(dbffile)
- dbf.add_field('Fl\xe4che', dbflib.FTDouble, 10, 5)
+ dbf.add_field(u'Fl\xe4che', dbflib.FTDouble, 10, 5)
dbf.write_record(0, (0.0,))
dbf.close()
shp = shapelib.create(shpfile, shapelib.SHPT_POLYGON)
@@ -306,7 +306,8 @@
# too
layer = session.Maps()[0].Layers()[0]
try:
- self.assertEquals(layer.GetClassificationColumn(), 'Fl\xe4che')
+ self.assertEquals(layer.GetClassificationColumn(),
+ internal_from_unicode(u'Fl\xe4che'))
except UnicodeError:
# FIXME: Obviously this will have to change if Thuban ever
# supports unicode properly.