I’m trying to write a script in which Grasshopper serves as an information visualization interface.
I’m working in Visual Studio Code, where I can import libraries into Python, and that external program continuously generates data.
The approach I’ve tried so far is to save the data generated by the external Python script to a CSV file and read that CSV file in Grasshopper. To keep it updating in Grasshopper, I also set up a timer so the script keeps re-reading the CSV file. However, whenever the script starts reading the CSV file, the external Python script stops working, because reading and writing can’t happen at the same time.
Does anyone have any thoughts? I’d appreciate your help!
I’d suggest using a SQLite database, which offers concurrent access, instead of a comma-separated file.
On Unix-based systems, you can also create a first-in-first-out pipe (cf. os.mkfifo), but not on Windows. If I remember right, Windows locks files when they’re opened, which would prevent any other process from reading or writing the same file.
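To illustrate the FIFO idea, here’s a minimal sketch (Unix only; the pipe name and the CSV-style payload are invented for the example, and the producer runs in a thread just to stand in for the external script):

```python
import os
import tempfile
import threading

# Create a named pipe (FIFO) in a temporary directory (Unix only)
fifo_path = os.path.join(tempfile.mkdtemp(), "gh_pipe")
os.mkfifo(fifo_path)

def producer():
    """Simulates the external Python script writing one line of data."""
    # open() for writing blocks until a reader opens the other end
    with open(fifo_path, "w") as pipe:
        pipe.write("1.0,2.0,3.0\n")

# Run the producer in a background thread so both ends can open
thread = threading.Thread(target=producer)
thread.start()

# The consumer (e.g. the Grasshopper side) reads from the pipe
with open(fifo_path) as pipe:
    data = pipe.read()
thread.join()

print(data.strip())  # "1.0,2.0,3.0"
```

Unlike a regular file, the pipe hands data directly from writer to reader, so there is no shared file to fight over.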
What kind of data is it? I thought at first it was a stream from some device; in that case I would try a TCP or UDP stream. But you can’t use a while loop in Grasshopper, only some sort of timer function. I’m very interested in this topic, but have no detailed solution.
You could write the data to a temporary CSV file in Python (make up your own filename or use the built-in tempfile module). Then you could try to copy the file to replace (or append to) your input CSV file in a try...except block, using a time.sleep(1) every time it raises an exception, and thus wait for the file to become available.
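A minimal sketch of that retry idea (the function name, filenames, and row format are invented for the example):

```python
import csv
import os
import tempfile
import time

def publish_rows(rows, target_path, retries=5):
    """Write rows to a temporary CSV, then move it over the target file,
    retrying while the target is locked by a reader."""
    # Write to a temp file in the same directory first, so the target
    # is never observed half-written (os.replace needs same filesystem)
    target_dir = os.path.dirname(os.path.abspath(target_path))
    fd, tmp_path = tempfile.mkstemp(suffix=".csv", dir=target_dir)
    with os.fdopen(fd, "w", newline="") as tmp:
        csv.writer(tmp).writerows(rows)
    for _ in range(retries):
        try:
            os.replace(tmp_path, target_path)  # atomic rename
            return True
        except OSError:
            time.sleep(1)  # target busy (e.g. locked on Windows); wait, retry
    return False
```

For example, `publish_rows([[1.0, 2.0, 3.0]], "points.csv")` would swap in a freshly written file, so the Grasshopper reader always sees either the old complete file or the new complete file.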
Have you tried that before? I attempted something very similar, and in my experience it’s super finicky to get right timing-wise, and error-prone.
No, the data is just generated locally by another external Python program on the same device. As diff-arch suggested, I might try using SQLite rather than a plain CSV file, since Windows won’t allow writing and reading at the same time.
One could implement a FileSystemWatcher to only read the input file when it changes and thus trigger downstream computation. I’m pretty sure that’s what the native Read File component uses, which one could also use to dynamically read in e.g. a CSV or JSON file.
Yes, or use a context manager or semaphores, but I think that using a database is way easier than dealing with all the other stuff. SQLite is well documented, and there is lots of sample code online that can be quickly adapted, and it’s always nice to have your data in a well-structured database anyway.
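To show how little code the SQLite route takes, here’s a sketch with a writer side and a reader side using separate connections to the same file (the database path and table name are invented for the example):

```python
import os
import sqlite3
import tempfile

# Invented path; in practice both processes agree on one database file
db_path = os.path.join(tempfile.mkdtemp(), "samples.db")

# Writer side (the external Python script)
writer = sqlite3.connect(db_path)
with writer:  # the connection as context manager commits on success
    writer.execute("CREATE TABLE IF NOT EXISTS samples (x REAL, y REAL, z REAL)")
    writer.execute("INSERT INTO samples VALUES (?, ?, ?)", (1.0, 2.0, 3.0))
writer.close()

# Reader side (the Grasshopper script) opens its own connection
reader = sqlite3.connect(db_path)
rows = reader.execute("SELECT x, y, z FROM samples").fetchall()
reader.close()

print(rows)  # [(1.0, 2.0, 3.0)]
```

SQLite serializes access internally, so a reader never sees a half-written row; at worst a busy connection raises sqlite3.OperationalError, which can be caught and retried.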
I’m trying to import System.Data.SQLite into the C# component, but I don’t know which version of the .NET Framework the C# component targets. I tried .NET 2.0, but it doesn’t seem to work and shows a warning like this:
Thanks again! It’s working well. Reading and writing sometimes conflict and cause errors on the Python side, but it works. I guess using some kind of watcher might help solve that.
As @Dancergraham suggested above, you could catch those errors and simply ignore them, meaning that if a sqlite3 exception ever gets raised, you skip it and try again by updating the GHPython component.
import Grasshopper as gh
import sqlite3

def update_component():
    """Updates this component, similar to using a Grasshopper timer."""
    # written by Anders Deleuran (andersholdendeleuran.com)
    def call_back(e):
        """Defines a callback action"""
        ghenv.Component.ExpireSolution(False)
    # Get the Grasshopper document
    ghDoc = ghenv.Component.OnPingDocument()
    # Schedule this component to expire
    ghDoc.ScheduleSolution(
        1, gh.Kernel.GH_Document.GH_ScheduleDelegate(call_back)
    )

db_path = "/home/marshall/db_path.db"
try:
    connection = sqlite3.connect(db_path)
    # read database and process data
    connection.close()
except sqlite3.Error:  # base class of all sqlite3 exceptions
    print "Unable to connect to database"  # skip

if Run:  # Run is a Boolean input
    update_component()  # try again
This is only sample code and should be refined further!
For instance, I would check separately, before the try...except statement, whether the database exists and is a valid database. If it doesn’t exist, the component could potentially try forever without ever realising it.
You probably want to raise an exception or component error for that case. And there may be others.
Here’s a function from my sqlite3 wrapper module that attempts to check those things:
import os

def is_sqlite3_database(db_path, strict=True):
    """Verifies whether db_path exists, is a file, and indeed a SQLite3 database.

    Args:
        db_path (str): An absolute path to a SQLite3 database file to verify
        strict (bool): Optionally False to skip validating the SQLite3 header,
            by default True

    Returns:
        True if db_path is a SQLite3 database file path, otherwise False.
    """
    if not os.path.isfile(db_path):
        return False
    if strict:
        # The SQLite3 file header occupies the first 100 bytes
        with open(db_path, mode='r', encoding="ISO-8859-1") as db:
            header = db.read(100)
        if not header.startswith("SQLite format 3"):
            return False
    return True
I’m afraid I’ve not implemented that class. I believe @TomTom has some experience with file event watchers and might be able to help. And just to reiterate, before jumping down the rabbit hole, one can dynamically read any text-based database file (e.g. CSV, JSON) using native Grasshopper components. Here’s a quick example manipulating coordinates in a CSV in a notepad and dynamically making some points from these in Grasshopper:
The FileSystemWatcher is a quite common utility class. Are we talking about C# or IronPython?
When you initialize it, just make sure to do it once per definition, and also make sure to dispose of (= destruct) the instance once the definition closes. Working properly with events, callbacks, or event-like patterns in Grasshopper is quite a challenge. I might come up with an example during the week. The class itself is quite straightforward: you use it to listen to file system changes within a folder and subscribe to certain events. Once an event fires, e.g. a file has changed in that particular directory, you perform an action in Grasshopper and trigger a recomputation of the whole solution. This way you don’t need to work with timers at all.
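The real FileSystemWatcher is a .NET class (System.IO.FileSystemWatcher) and is truly event-driven. As a rough pure-Python stand-in for the core idea — only react when the file actually changed — one could poll the file’s modification time instead (class and file names here are invented; the os.utime call just makes the demo deterministic):

```python
import os
import tempfile

class MTimeWatcher:
    """Reports whether a file changed since the last check by comparing
    its modification time. A simplistic, polling-based stand-in for an
    event-driven file watcher."""

    def __init__(self, path):
        self.path = path
        self._last = os.path.getmtime(path)

    def changed(self):
        """Return True exactly once per modification of the file."""
        mtime = os.path.getmtime(self.path)
        if mtime != self._last:
            self._last = mtime
            return True
        return False

# Usage sketch
path = os.path.join(tempfile.mkdtemp(), "data.csv")
open(path, "w").close()
watcher = MTimeWatcher(path)

with open(path, "w") as f:
    f.write("1,2,3\n")
# Force a distinct mtime so the demo is deterministic on any filesystem
os.utime(path, (2_000_000_000, 2_000_000_000))

changed = watcher.changed()  # True: the file was just modified
```

Combined with the timer-style component update shown earlier, this would at least skip re-reading (and re-locking) the file on solutions where nothing changed.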
The alternative, as already mentioned, is using a local database. All you need to do is set up a data model and connect from the different applications. As mentioned, databases are designed for concurrent access, but they may not trigger events; they’re about storing data, not observing data. So you may still need to work with the timer approach.
There are more ways of doing interprocess communication: a local TCP/UDP connection, pipes, shared memory, etc. All of them are a little more advanced, but usually more efficient. Again, the big problem is properly injecting them into Grasshopper.
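For completeness, here’s a minimal sketch of the local TCP variant mentioned above (the payload is invented, the OS picks a free port, and the server runs in a thread only to keep the example self-contained; in practice the server would be the external Python script):

```python
import socket
import threading

HOST = "127.0.0.1"

# Server side (e.g. the external Python script streaming data)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, 0))  # port 0: let the OS choose a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    """Accept one client and send a single CSV-style line of data."""
    conn, _ = server.accept()
    conn.sendall(b"1.0,2.0,3.0\n")
    conn.close()

thread = threading.Thread(target=serve_once)
thread.start()

# Client side (e.g. the Grasshopper script fetching the latest data)
client = socket.create_connection((HOST, port))
payload = client.recv(1024).decode()
client.close()
thread.join()
server.close()

print(payload.strip())  # "1.0,2.0,3.0"
```

Since no file is involved, there is nothing for Windows to lock; the remaining challenge is, as said, wiring the receive step into Grasshopper without a blocking loop.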