On Apr 17, 2014, at 2:11 PM, Navya Shilpa Josyula <njosyu2@uic.edu> wrote:

> Now I am trying to write the CASTp information for each of my proteins into a separate file. As you suggested in an earlier email, the processCastpID function is in the gui.py file but not in the __init__.py file; I hope I am not missing anything here. As I understand it, this function fetches the four CASTp files, of which I would need only the ".poc" and ".pocInfo" files. From these two files I want to write only the atom list, pocID, and MS_Volume data into a single file for all 400 proteins in my dataset. Is there a link or a script available for such a requirement?

There are some fine points that I missed in my answer yesterday, and the situation is complicated further by your use of .pdb1 files instead of the "normal" entries.

For one thing, if you are going to use the .pdb1 files, you will have to run CASTp yourself on each one and then process the results. In that case you might as well also parse the .poc and .pocInfo files yourself to determine which pocket each atom belongs to (the next-to-last field in the .poc file) and the volume of that pocket (listed in the .pocInfo file); a rough sketch of that kind of post-processing is further down.

The main point I missed in my reply, which may now be moot given the .pdb1 issue, is that processCastpID() builds its own structure, so you would not open the PDB first. Instead you would return the structure (along with the cavity list) from that method and make it available in Chimera with:

	chimera.openModels.add([structure])

and then proceed with selecting the right residues, using currentResidues() to list them, and so on. If you didn't want to process the .pdb1 CASTp output yourself (after running CASTp on the .pdb1 files), you could use processCastpFiles() to get the cavity list and structure and proceed as just outlined. processCastpFiles() *is* in __init__, unlike processCastpID(), as you found!
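If it helps, here is roughly what that per-structure flow could look like as a script. This is untested, and the import location, the processCastpFiles() argument list and return values, and the cavity attribute name are all guesses from memory, so check them against the module's __init__.py and gui.py before relying on it:

	# Untested sketch of the per-structure flow inside Chimera; the import
	# location, the processCastpFiles() signature/return values, and the
	# 'pocketAtoms' attribute are guesses -- check __init__.py and gui.py.
	import chimera
	from chimera.selection import setCurrent, currentResidues
	from CASTp import processCastpFiles     # adjust to the real package name

	def listPocketResidues(castpFileNames):
	    # guessed call: whatever files processCastpFiles() really wants,
	    # returning the structure plus the cavity list
	    structure, cavities = processCastpFiles(*castpFileNames)
	    chimera.openModels.add([structure])
	    for cav in cavities:
	        setCurrent(cav.pocketAtoms)     # placeholder attribute name
	        print("%s: %s" % (cav, [str(r) for r in currentResidues()]))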
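As for parsing the .poc and .pocInfo files yourself, something along these lines would produce the combined atom / pocID / MS_Volume file you described. It is untested, and I'm guessing at the .pocInfo column layout and at the NAME.poc / NAME.pocInfo naming, so print a header line from one of your real files and adjust the indices before running it over all 400 proteins:

	# castp_summary.py -- untested sketch; verify the column layouts against
	# a couple of your real .poc / .pocInfo files before trusting the output.
	import glob
	import os

	def read_pocket_volumes(pocInfoFile):
	    """Map pocket ID -> MS volume.  I'm assuming a whitespace-separated
	    table; print the header of one of your files and fix id_col / vol_col
	    to match the real columns."""
	    id_col, vol_col = 1, 2          # guesses!
	    volumes = {}
	    with open(pocInfoFile) as f:
	        f.readline()                # skip header line
	        for line in f:
	            fields = line.split()
	            if len(fields) > max(id_col, vol_col):
	                volumes[fields[id_col]] = fields[vol_col]
	    return volumes

	def summarize(pocFile, pocInfoFile, out):
	    volumes = read_pocket_volumes(pocInfoFile)
	    name = os.path.basename(pocFile)
	    with open(pocFile) as f:
	        for line in f:
	            if not line.startswith(("ATOM", "HETATM")):
	                continue
	            fields = line.split()
	            pocID = fields[-2]      # next-to-last field = pocket ID
	            # fields[1:6] is typically serial, atom name, residue, chain,
	            # residue number -- adjust if your records have altLoc codes
	            atomInfo = " ".join(fields[1:6])
	            out.write("%s\t%s\t%s\t%s\n"
	                      % (name, atomInfo, pocID, volumes.get(pocID, "NA")))

	if __name__ == "__main__":
	    with open("castp_summary.txt", "w") as out:
	        out.write("file\tatom\tpocID\tMS_volume\n")
	        for poc in glob.glob("*.poc"):
	            summarize(poc, poc[:-len(".poc")] + ".pocInfo", out)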
> Again, as mentioned in my last email, since my output files will be huge, will I be able to write them directly to a database table in SQL Server?

I'm not much of an expert on this, but maybe this page would help: https://wiki.python.org/moin/DatabaseInterfaces

--Eric
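P.S. If you do go the database route, the Python DB-API pattern is pretty uniform across drivers. Below is a bare-bones sketch using the built-in sqlite3 module, only because it needs no server; for SQL Server you would install one of the drivers from that wiki page (pyodbc, for example) and change the connect() call, but the cursor/execute/commit part stays essentially the same. The table name, columns, and row values are made up to match the atom / pocID / MS_Volume data you mentioned.

	# Minimal DB-API sketch: replace sqlite3/connect() with your SQL Server
	# driver (e.g. pyodbc) -- the cursor/execute/commit pattern stays the same.
	import sqlite3

	conn = sqlite3.connect("castp_results.db")
	cur = conn.cursor()
	cur.execute("""CREATE TABLE IF NOT EXISTS pockets
	               (protein TEXT, atom_info TEXT, poc_id TEXT, ms_volume REAL)""")

	# Insert rows as you generate them instead of writing huge text files.
	rows = [("1abc", "CA ALA A 42", "3", 512.7)]   # made-up example values
	cur.executemany("INSERT INTO pockets VALUES (?, ?, ?, ?)", rows)
	conn.commit()
	conn.close()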