Dataset Viewer
Auto-converted to Parquet
Columns: text (string), term_score (int32), term_score_v2 (int32)
Take the 2-minute tour × I had the Logitech MX-518 mouse, but it had been having issues with responsiveness, causing me to call support for a replacement. Instead of another 518, they sent me a Logitech G400 mouse because the 518 has been discontinued. This causes issues because, while the MX518 was supported by lomoco, the G400 mouse is unsupported. Running $ lomoco -s shows 001.003: 046d:c245 Unsupported Logitech device: Unknown. What I would like to do is lock the DPI of my mouse to a single value and remap the DPI+ and DPI- buttons to PgUp and PgDn on my keyboard. How would I accomplish this? Logitech G400 The buttons are, in order: 1. Button 1: Left-click 2. Button 2: Middle-click 3. Button 3: Right-click 4. Button 4: Mouse Wheel Up 5. Button 5: Mouse Wheel Down 6. Button 6: None 7. Button 7: None 8. Button 8: Thumb Button #1 9. Button 9: Thumb Button #2 10. Button 10: Task Switcher Button 11. Button 11: None 12. Button 12: None On the previous mouse (MX 518), buttons 11 and 12 were the DPI keys. One of the things that makes these buttons different than the rest is that applications such as xev do not recognize pressing them as an event, by default. On the MX 518 mouse, in order to make those buttons able to be altered / binded, they had to first be disabled. I believe that lomoco called it "Logitech SmartScroll / Cruise Control." On the G400, lomoco doesn't work and I am unaware of an alternative. Also, here is some output from xinput, in case it is helpful. user@localhost:~$ xinput list ⎜ ↳ Logitech Gaming Mouse G400 id=8 [slave pointer (2)] user@localhost:~$ xinput list-props 8 Device 'Logitech Gaming Mouse G400': Device Enabled (121): 1 Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000 Device Accel Profile (248): 0 Device Accel Constant Deceleration (249): 2.000000 Device Accel Adaptive Deceleration (250): 1.000000 Device Accel Velocity Scaling (251): 1.000000 Device Product ID (238): 1133, 49733 Device Node (239): "/dev/input/event4" Evdev Axis Inversion (252): 0, 0 Evdev Axes Swap (254): 0 Axis Labels (255): "Rel X" (131), "Rel Y" (132), "Rel Vert Wheel" (247) Button Labels (256): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (242), "Button Extra" (243), "Button Forward" (244), "Button Back" (245), "Button Task" (246), "Button Unknown" (241), "Button Unknown" (241), "Button Unknown" (241), "Button Unknown" (241) Evdev Middle Button Emulation (257): 0 Evdev Middle Button Timeout (258): 50 Evdev Third Button Emulation (259): 0 Evdev Third Button Emulation Timeout (260): 1000 Evdev Third Button Emulation Button (261): 3 Evdev Third Button Emulation Threshold (262): 20 Evdev Wheel Emulation (263): 0 Evdev Wheel Emulation Axes (264): 0, 0, 4, 5 Evdev Wheel Emulation Inertia (265): 10 Evdev Wheel Emulation Timeout (266): 200 Evdev Wheel Emulation Button (267): 4 Evdev Drag Lock Buttons (268): 0 share|improve this question Have a look at the solution to a similar question. Give it a try and report back if it solves your issue. –  Mark Rooney Apr 6 '12 at 5:18 @MarkRooney That question doesn't seem to help, unfortunately. In that case, the mouse wasn't functioning properly. In my case, the mouse works exactly as intended by Logitech. I just want to remap the DPI buttons to have useful functions. 
–  Koviko Aug 8 '12 at 17:34 add comment 1 Answer up vote 2 down vote accepted @Koviko - I have a similar mouse - a Logitech MX1100 - that also has DPI buttons that aren't sent to the USB when pressed in default mode. I did some testing on my own, and eventually was able to figure out the codes to send the signal to switch the mouse into "Driver Mode", which then allowed me to use easygestures/xev to reassign the buttons. If you want, I can walk you through the steps I used to determine how to switch it off (I now have a script that I simply need to run on startup, as a very hack-y workaround, but it's working at least), but it involves setting up a VM and having a secondary mouse and sniffing the raw USB traffic, and unfortunately seems likely to be very mouse-specific. My steps (better ones almost certainly exist): 1) Have a Windows VM (with the Logitech SetPoint software installed; I used VirtualBox, because that's what I already had set up with WinXP for work), Wireshark, and gcc installed on your system. 2) Then I ran the following steps in a terminal: sudo modprobe usbmon sudo wireshark & sudo /usr/lib/virtualbox/VirtualBox & 3) Within Wireshark, choose to 'List the available capture interfaces...', and make a note of which USB bus number generates a ton of packets when you move your mouse around (mine was usbmon3, but I imagine that's purely based on which USB port your receiver is plugged in to). 3) From within VirtualBox (I needed to run as sudo in order to share the USB Controller), I edited the settings of the XP VM, and enabled both the USB Controller and the USB 2.0 (EHCI) Controller. Then I added a new USB Filter populated from an existing device, and selected my Logitech mouse's receiver (Vendor ID 046d, Product c245, for you) and then started up the VM. (Note: After this point, I needed a second mouse plugged in, because I had to give control over my regular mouse to the Windows VM so that the SetPoint software could see that it existed as something more than a generic mouse.) 4) In the VM, I then launched the SetPoint software, and went to the screen that lets you set custom actions for various buttons. Then back in Wireshark, I started a capture on the USB bus for the mouse, then immediately went in to the VM/SetPoint, and changed the button assignment from DPI +/- to Keystroke Assignment, then immediately went back to Wireshark and stopped the capture. (I repeated this about 10-15 more times, changing the settings to different modes, mostly because I wasn't sure how much data I'd need, but after reviewing, I really only needed the first 1-2 captures.) Assuming your mouse works vaguely similar to mine, which I'd guess it would, your capture would likely have a total of 16 frames, 4x GET DESCRIPTOR, then 3x(2xURB_CONTROL out + 2xURB_INTERRUPT in). What you're looking for are the 3 longer URB_CONTROL out frames. An example of one of my captured frames is: 0000 c0 80 64 36 00 88 ff ff 53 02 00 03 03 00 00 00 0010 5e 4b 25 50 00 00 00 00 f4 d9 08 00 8d ff ff ff 0020 07 00 00 00 07 00 00 00 21 09 10 02 01 00 07 00 0040 10 01 80 65 82 85 ff What we're looking for are the last 7 bytes from the response (in the above, the '10 01 80 65 82 85 ff'), from each of the longer 'URB_CONTROL out' frames. 
Finally, I downloaded the source of the "g_hack" from Git, and cobbled in both my mouse product code at the top, and a new option (I set it to 0/1 with an if statement within them since it was just a very crude proof of concept) which would switch my mouse into "driver mode" or "DPI mode". After that, all that was required was to set up the newly available mouse buttons in your choice of remapping programs (I used easygestures because that was the first thing with a UI I found - it may or may not have a superior replacement). share|improve this answer Everything regarding g_hack becomes rather cryptic, to me. Could you post the code that you wrote somewhere so that this can be easier to follow? I'm not sure what to do with those last 7 bytes from Wireshark, where you tossed in your product code, or where you added your new option. –  Koviko Aug 11 '12 at 18:15 Certainly! See here for my version. I've annotated each section with my username to easily jump, but you've basically got two places to add in the product code detection, and then two more places to add the 7 bytes you detected with Wireshark. –  Icehawk78 Aug 13 '12 at 14:44 Sadly, I'm having difficulty adapting this to my packet captures. I only have 2 bytes on 0040. No matter what I put into send_msg, I get the error "error sending to device: Invalid argument." I also tried changing uref.report_id to 0x8E and to 0x20 in send_report, but I still got the same error. –  Koviko Aug 14 '12 at 5:35 Are these captures the entirety of the exchange when you reassigned the buttons, or just the URB_CONTROL out frames? It looks like your mouse is using a slightly different protocol than the others I've looked at. If you can pastebin the entire exchange for each capture, I'll see if I can tease out a better attempt for you to try. –  Icehawk78 Aug 15 '12 at 14:24 Here are the full exchanges for unassigning DPI+, assigning DPI+ to PgUp, unassigning DPI-, and assigning DPI- to PgDn. Only one of them actually has URB_INTERRUPT in frames. The rest are composed entirely of URB_CONTROL out frames. –  Koviko Aug 15 '12 at 14:50 show 1 more comment Your Answer
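As a side note, the captured setup packet (21 09 10 02 01 00 07 00) looks like an ordinary HID SET_REPORT control transfer, so the same frames could in principle be replayed from a small Python script instead of the g_hack C tool. The sketch below is hypothetical and untested: the vendor/product IDs come from the lomoco output in the question, the request fields are read off the capture above, and the 7-byte payload is only my MX1100 example frame, so you would substitute the bytes from your own 'URB_CONTROL out' frames.
# Hypothetical sketch (pyusb): replay the captured SET_REPORT frames.
# Replace PAYLOADS with the 7-byte sequences from your own captures.
import usb.core
VENDOR, PRODUCT = 0x046d, 0xc245  # IDs from the lomoco output in the question
PAYLOADS = [bytes([0x10, 0x01, 0x80, 0x65, 0x82, 0x85, 0xff])]  # example frame above
dev = usb.core.find(idVendor=VENDOR, idProduct=PRODUCT)
if dev is None:
    raise SystemExit("G400 receiver not found")
for intf in (0, 1):  # the kernel HID driver normally owns these interfaces
    if dev.is_kernel_driver_active(intf):
        dev.detach_kernel_driver(intf)
for payload in PAYLOADS:
    # Setup packet fields read off the capture: 21 09 10 02 01 00 07 00
    # bmRequestType=0x21, bRequest=0x09 (SET_REPORT), wValue=0x0210, wIndex=0x0001
    dev.ctrl_transfer(0x21, 0x09, 0x0210, 0x0001, payload)
for intf in (0, 1):  # hand the device back to the kernel driver
    try:
        dev.attach_kernel_driver(intf)
    except usb.core.USBError:
        pass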
11
6
is implemented with id ? Aahz aahz at Sun Nov 4 06:12:10 CET 2012 Hans Mulder <hansmu at> wrote: >On 3/11/12 20:41:28, Aahz wrote: >> In article <50475822$0$6867$e4fe514c at>, >> Hans Mulder <hansmu at> wrote: >>> On 5/09/12 15:19:47, Franck Ditter wrote: >>>> - I should have said that I work with Python 3. Does that matter ? >>>> - May I reformulate the queston : "a is b" and "id(a) == id(b)" >>>> both mean : "a et b share the same physical address". Is that True ? >>> Yes. >>> Keep in mind, though, that in some implementation (e.g. Jython), the >>> physical address may change during the life time of an object. >>> It's usually phrased as "a and b are the same object". If the object >>> is mutable, then changing a will also change b. If a and b aren't >>> mutable, then it doesn't really matter whether they share a physical >>> address. >> That last sentence is not quite true. intern() is used to ensure that >> strings share a physical address to save memory. >savings are minor; the time savings may be significant. As others have pointed out, using ``is`` with strings is a Bad Habit likely leading to nasty, hard-to-find bugs. intern() costs time, but saves considerable space in any application with lots of duplicate computed strings (hundreds of megabytes in some Aahz (aahz at <*> More information about the Python-list mailing list
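A quick illustration of the point being argued, assuming CPython 3 (where intern() moved to sys.intern):
import sys
base = "na"
a = base * 2            # built at run time, so a fresh "nana" object
b = "nana"
print(a == b)           # True: same characters
print(a is b)           # False: different objects, so id(a) != id(b)
a = sys.intern(base * 2)
b = sys.intern("nana")
print(a is b)           # True: interning maps equal strings to one shared object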
12
6
The stargate of devices Open your topic Your latest topics No topics found, let‘s open a new one! QEST is a stargate between the universe of devices which speak MQTT, and the universe of apps which speak HTTP and REST. In this way you don't have to deal any custom protocol, you just GET and PUT the topic URI, like these: $ curl -X PUT -d '{ "hello": 555 }' \ -H "Content-Type: application/json" \ $ curl http://mqtt.matteocollina.com/topics/prova { "hello": 555 } Let's build cool things with MQTT, REST and Arduino! Here we are dreaming a Web of Things, where you can reach (and interact) with each of your "real" devices using the web, as it's the Way everybody interacts with a computer these days. However it's somewhat hard to build these kind of apps, so researchers have written custom protocols for communicating with the devices. The state-of-the-art protocol for devices is MQTT, which is standard, free of royalties, and widespread: there are libraries for all the major platforms. The state-of-the-art protocol for apps are REST and HTTP, so why can't we bridge them? So QEST was born. Mailing list
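The curl commands above cover the HTTP side only; the other half of the bridge is a device publishing over MQTT. A rough sketch of the round trip in Python follows, assuming (this is not stated on the page) that QEST exposes the MQTT topic "prova" as /topics/prova and that the broker listens on the standard port 1883.
import json
import paho.mqtt.publish as publish
import requests
# Device side: publish over MQTT (topic name and port are assumptions).
publish.single("prova", json.dumps({"hello": 555}),
               hostname="mqtt.matteocollina.com", port=1883)
# App side: read the same topic over REST.
resp = requests.get("http://mqtt.matteocollina.com/topics/prova")
print(resp.json())  # {"hello": 555}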
6
6
Take the 2-minute tour × So this is how I set up my project: git init --bare Later I learned that if you want to work on a project with multiple users this is how I should have done it: git init --bare --shared Now I tried to work like that and luckily we are in the beginning so I could set up git again. I still wonder though when you're in the middle of a project you can't do that. Is there a way that i can change a bare repo to a shared one? share|improve this question add comment 4 Answers up vote 17 down vote accepted Since the --shared option just sets the permissions on everything in the repository to group-writable you could do this manually later: $ chmod -R g+w the/repo/path Plus, add sharedrepository = 1 under the [core] section in .git/config. Shared repos also have the following receive option defined by default (which you may or may not want): denyNonFastforwards = true share|improve this answer Aha ok! Good to know, wish I had asked this before. Thanks! –  bottleboot Jan 16 '12 at 16:46 Ok, I see! I just read the @jørgensen answer which confirms that. Stackoverflow should have a combined answer button :D! Thank you all a lot that was very enlightening! –  bottleboot Jan 16 '12 at 16:53 Didn't work for me. It required chmod -R g+s .... A fresh git init --bare --shared will have the group rights "rws". (Ubuntu 12.04) –  Unapiedra Jan 29 at 15:11 add comment Besides chmod -R g+w, you also need to edit (.git/)config and set core.sharedRepository = .... For ..., there are a handful of values, described in git-init(1). share|improve this answer Ok! That seems to completes my suspicion that I also needed to change the config. Thanks! –  bottleboot Jan 16 '12 at 16:49 add comment Probably if you try to share an existent repository, you may have lots of different users commits. 1.If you have super user permission, you can go forward and change all permissions by yourself using the step two, in any-other case you will need to ask all users with objects created with their users, use the following command to know who they are: $ ls -la | awk '{print $3}' | sort -u <your user_name> <his user_name> 2.Now you and all file's owner users will have to change those files permission, doing: $ chmod -R 774 . 3.After that you will need to add a new property that is equivalent to --shared=group done for the new repository, according to the documentation, this make the repository group-writable, do it executing: $ git config core.sharedRepository group share|improve this answer add comment If you're trying to share the repository off of the the host it is on, there are additional configuration steps you have to make (ssh stuff). share|improve this answer I don't think that is what we're doing for this current repo. Thanks though! –  bottleboot Jan 16 '12 at 16:46 add comment Your Answer
13
9
Take the 2-minute tour × The command ls .* when run gives as output the following : • All the files in the current directory starting with a . (hidden files) • All the files in the hidden directories present in the current directory • All the files in the current directory • All the files in the parent directory Why does the command ls *. not display : • All the files in the current directory • All the files in the parent directory Reason I am thinking so is : The regular expression *. should match both . and .. So ls should be run on both and thus the output which I am expecting should be displayed share|improve this question add comment 2 Answers up vote 4 down vote accepted It's because * doesn't match files starting with a . by default. Consider the following directory: $ ls -la total 8404 drwxrwxrwx 2 terdon terdon 8105984 Dec 31 13:14 . drwxr-xr-x 153 terdon terdon 491520 Dec 30 22:32 .. -rw-r--r-- 1 terdon terdon 0 Dec 31 13:14 .dotfile -rw-r--r-- 1 terdon terdon 0 Dec 31 13:14 file3. Let's see what each of the globs you used expands to: $ echo .* . .. .dotfile $ echo *. $ echo * file1 file2 file3. As you can see, the * does not include files or directories starting with . so both ./ and ../ are ignored. The same thing happens with your ls example. In bash, you can change this with the dotglob parameter: $ shopt -s dotglob $ echo .* . .. .dotfile Other shells behave differently, for example csh: % echo .* . .. .dotfile share|improve this answer Great explanation –  X Tian Dec 31 '13 at 11:45 add comment The rule for filename expansion have a special case for . as the first character in a filename: it must be explicitly matched (i.e. the pattern must contain a starting ., or . after a /). Otherwise these files are not candidates. This is why your first version does pick up filenames that start with ., but the second doesn't. * doesn't match . as the first character of a filename. POSIX Shell Command Language describes it as: If a filename begins with a period ( '.' ), the period shall be explicitly matched by using a period as the first character of the pattern or immediately following a slash character. The leading period shall not be matched by: • The asterisk or question-mark special characters • A bracket expression containing a non-matching list, such as "[!a]", a range expression, such as "[%-0]", or a character class expression, such as "[[:punct:]]" It is unspecified whether an explicit period in a bracket expression matching list, such as "[.abc]", can match a leading period in a filename. Your shell might have options to change this behavior. Bash has this for instance (Filename expansion): Note that these are not regular expressions. .* as a regex would match anything at all (including nothing). *. would be ill-formed. share|improve this answer add comment Your Answer
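Python's glob module applies the same leading-dot rule, which gives a quick way to check the behaviour outside the shell (file names below match the example directory above):
import glob
# '*' never matches a leading dot; unlike the shell, glob never returns '.' or '..'
# because it is built on os.listdir().
print(glob.glob('*'))    # ['file1', 'file2', 'file3.']
print(glob.glob('.*'))   # ['.dotfile']
print(glob.glob('*.'))   # ['file3.']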
19
11
Re: CSS Variables Draft Proposal From: Boris Zbarsky <bzbarsky@MIT.EDU> Date: Mon, 14 Feb 2011 16:28:14 -0500 Message-ID: <4D599E6E.8050002@mit.edu> On 2/14/11 3:46 PM, Tab Atkins Jr. wrote: > This term is underdefined for my usage, > and perhaps not exactly what I want, though. > You can't put a unit in and expect to use it as a unit, for example, the "@var $foo > px;" is perfectly fine if used as a keyword. This shouldn't be hard - > the intent is just that you can't store a "partial value" in a > variable and then compose it with something else to get a whole value > (so you can't do something like "@var $foo px; p { width: 200$foo; > }"). > I heard conflicting statements about whether "token" was correct here, > so I just avoided the issue and used a different word. What is the > correct term? I think the problem you're having is that this concept of "value" is not really exactly how the CSS spec is defined at the moment, and different UAs have different internal concepts of "value". At least as far as I can tell. Offhand, I wouldn't be willing to claim that the same string is always treated as the same kind of "value" in Gecko, even. It might well be context-dependent. I'm not saying that's the case; just that nothing ensures that it's not. I agree that a raw token stream may not be the right thing due to things @var foo 255, 255); which could add pretty oddly if $foo is used like so: color: rgb(0, $foo, 0); (though in this case I think it'll just cause the whole property to be discarded). But if we require that any close parens/curlies/brackets be matched by open parens/curlies/brackets in the variable definition, then it seems like a token stream with that restriction might be ok. It would certainly make it much simpler to specify how variable substitution should work: you just tokenize the template, replace the $foo with the corresponding token stream, and then parse the resulting token stream. If you want to do this in terms of values, then you have to define somewhere what the value sets for various properties are, which sounds like a pretty major undertaking. > "component value" is defined in CSS2.1, at > <http://www.w3.org/TR/CSS21/about.html#value-defs>. It's not exactly > what I want, but it appears to be closer in intent than "token". Hmm. So the problem is that nothing guarantees that different value types as defined here will be syntactically distinct (and in fact they're not). Put another way, you can't tell what sort of value it is until you see how it's being used. That seems unfortunate. >> Currently in a situation like that (same property specified multiple times >> in a declaration) only the last specified value needs to be kept by the UA. >> It sounds like your proposal is that this is no longer the case with >> variables, right? > Yes, though your gloss isn't completely correct, right? If you make a > declaration block contain the same property twice, and use the CSSOM > to twiddle whether the second one's value is valid or not, you have to > pay attention to the first one. Gecko certainly doesn't. Invalid stuff is dropped at _parse_ time and not exposed to the CSSOM at all. In particular, up until now invalid stuff has always been dropped at parse time, since the whole point is that if you don't know what it is you can't parse it apart from just skipping over it. > Do you just let this case fall down a > slow path, where you effectively reparse the block? No; this case simply doesn't arise right now. 
You're introducing it, by requiring some sort of non-parse-time discarding behavior. >>> Scoped stylesheets (those created with a `<style scoped>` element in >>> HTML) have their own nested global scope. Variables created or >>> imported within a scoped stylesheet are only available within the >>> scoped stylesheet; variables created in the outer global scope are >>> still available in a scoped stylesheet. >> I'm not sure I follow this. Say I have this markup: >> <div> >> <p> >> </p> >> </div> >> with stylesheets scoped to the<div> and<p>. If I have an @var in the >> div-scoped sheet, can the p-scoped sheet use it? Note that rules in the >> div-scoped sheet apply to the<p> and all, in general. > No. This is defined by HTML - I'm just restating the restrictions > that<style scoped> applies, for clarity. What you're stating is different from what the HTML5 draft says about <style scoped> as far as I can tell. Again, the div-scoped sheet's rules apply to the <p> if I read the <style scoped> draft correctly, but you're saying its @vars do not? >> This needs to happen to understand how variables can actually be used; see >> above; > Does what Bjoern wrote help here? Not terribly, no. I'll try rereading it again to see if I can make sense of it this time... > It'll be overrideable, so I doubt it'll cause any problems. You mean replaceable? It can still cause problems even so (esp. if multiple scripts interact, one of which writes it and one of which wants to mess with your new APIs). > I'd also like there to be a window.css which forwards to > window.document.css, for ease of use. That seems to have even more scope for problems. >>> To add a new map entry, we first define `css.stylesheet`, which >>> implements the `StyleSheet` interface. This stylesheet is treated as >>> an author-level sheet placed after all other author-level sheets. >>> Creating a new map entry creates a corresponding @var rule in this >>> stylesheet. >> What about adding other rules to the sheet? Would they be applied to the >> document? > It acts like a stylesheet in the document, so yes. So a question.... apart from the handling of !important, how is this different from the override sheet stuff CSSOM specifies already? >>> Variables appear as themselves in specified values. If the variable is >>> defined and valid, its computed value is the value of the variable. If not, >>> its computed value is the variable name. >> I don't understand this at all, if invalid values are supposed to be treated >> like parse errors.... What is this trying to say? > Invalid values are no longer parse errors, since some time before you > quoted this out of the draft. That doesn't answer my question. Consider this style: div { color: red; color: $foo; } p { color: $foo; } What is the specified value of "color" for <div>s? What is the computed value of "color" for <p>s? How do I reconcile those answers with the text quoted above? >> 2) Can the type be changed via the CSSOM? I assume yes, to make Daniel >> happy. ;) > Yeah. A followup: what happens if you try to change to an unknown type? >> @var color $foo 12px; >> * { font-size: $foo; } >> then do I get 12px font-size? Or is the variable considered invalid if its >> value in the @var can't be parsed as its type? > The validity of the variable can be verified at parse time in this > proposal, so the $foo declaration would be invalid, and no $foo > variable would be created. The font-size declaration is then invalid, > as it references an undefined variable. 
OK (though this is not clear from the spec). Let's try a more interesting testcase: @var color $foo red; * { font-family $foo; } If I have a font with a family name of "red" on my system, will I get it? >>> The previous suggestion seems to put the typing in the wrong place. >>> Typing doesn't help the CSS developer in any way, as CSS can figure >>> out types as necessary all by itself. >> Maybe... and maybe not. It sort of depends on what variable values "are". >> See beginning of this mail. > I mean that you can figure out types at the time of use. You can't > possibly infer types at definition time, as there is too much > ambiguity. OK, but my point is that sometimes you can't really figure out types at time of use either, without falling back on the actual tokens involved. >>> This would only work if the OM interfaces were carefully designed in >>> such a way that there is never ambiguity >> Seems fragile.... > I agree. We want to try this and see if it works, though, before > throwing it out. The problem is that by the time we decide it doesn't work the damage will have been done: we'll have interfaces we can't drop for compat reasons but that will sort of suck in actual use. See getComputedStyle as it is currently practiced. Received on Monday, 14 February 2011 21:29:17 GMT
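For what it's worth, the unbalanced-parenthesis problem discussed earlier in this message is easy to see with a toy token-level substitution (purely illustrative; no UA works this way):
import re
variables = {"$foo": "255, 255)"}   # from "@var foo 255, 255);"
def substitute(template, env):
    # Paste each variable's raw token run into the template, verbatim.
    return re.sub(r"\$\w+", lambda m: env.get(m.group(0), m.group(0)), template)
print(substitute("color: rgb(0, $foo, 0);", variables))
# prints: color: rgb(0, 255, 255), 0);   (mismatched parens; the declaration is dropped)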
12
6
Take the 2-minute tour × I am creating very simple CMS for my organisation. My strategy is to embed editable content between tags called < editable >. However to hide these from the browser I am commenting them out. So an example of an editable region will look like this. <!-- <editable name="news_item> Today's news is ... </editable> --> With the content "Today's news is ... " being picked up by the CMS and made editable in the online HTML editor. I would like to be able to "grab" the name attribute's value as well as the content contained within the tags. Is there a simple way to do this with XPath, XQuey type things, or is regex the best way to go ( ]esp. given that the regex will not need too much fault tolerance, since I know exactly what the xml will be, because I will be writing the code that generates it). share|improve this question why are you putting news content into a webpage and then commenting it out to hide it from the webpage? Have you considered storing this editable content in a database? I suppose I don't fully understand the concept though –  Carson Myers Jun 15 '09 at 7:00 Please correct me if I am missing something very obvious but why can't you keep your editable content as 'hidden' if you want to hide it from browsers instead of adding them as comments? –  Aamir Jun 15 '09 at 7:02 no reason why you can't, just I've written a number of CMS...es, and I was just having a hard time understanding the way you are storing the data. In any case, there are already a number of good answers. –  Carson Myers Jun 15 '09 at 7:05 We want to display the content such as News Items ... or Main Page text .... but we want this to be editable. So you can think of the <editable> tags as placeholders, which tell our app, what content is editable. The point of this is that we do not need a DB, and can simply display flat HTML files. Our needs are very simple and this is a quick and dirty solution. –  Ankur Jun 15 '09 at 7:16 add comment 6 Answers up vote 3 down vote accepted By DOM Parser, do you mean javascript? If so, this blog post suggests that you can indeed slice and dice HTML comments. And, because mentioning javascript without mentioning jQuery is a sin, here's a jQuery plugin that will find all the HTML comments for you. share|improve this answer I like the idea of using jQuery –  Ankur Jun 15 '09 at 7:17 The blog talks about exactly what I want to do. Good to know I am not the only one. –  Ankur Jun 15 '09 at 7:19 add comment Most parsers are able to get comments without a problem. They will not probably parse them into a DOM structure, but you could do that with them manually once you get the actual comments. This is an example using BeautifulSoup with Python: >>> from BeautifulSoup import BeautifulSoup, Comment >>> html_document = """ ... <html> ... <head> ... </head> ... <body> ... <h1>My Html Document</h1> ... <!-- This is a normal comment. --> ... <p>This is some more text.</p> ... <!-- <editable name="news_item">Today's news is Paolo Rocks!</editable> --> ... <p>Yet More Content</p> ... </body> ... </html> ... """ >>> soup = BeautifulSoup(html_document) >>> comments = soup.findAll(text=lambda text:isinstance(text,Comment)) >>> comments [u' This is a normal comment. ', u' <editable name="news_item">Today\'s news is Paolo Rocks!</editable> '] >>> for comment in comments: ... editable = BeautifulSoup(comment).find('editable') ... if editable is not None: ... 
print editable['name'], editable.contents news_item [u"Today's news is Paolo Rocks!"] share|improve this answer add comment The whole point of a comment is that the DOM will not parse the content. So the whole comment is just text. I'd be inclind to use RegEx in this case. However if you certain the content is HTML you would create a DOM element (say a DIV) and assign the comment text to the innerHTML. The you could examine the DOM created from the element. Once you aquired what you need you could drop the DIV element which you would never have added to the current document. share|improve this answer You could also use display:none on the div so it doesn't take up space or display its content, and then just leave it there with the data inside. That should work unless you run into browser compatibility issues. –  teh_noob Jun 15 '09 at 7:06 add comment I'm pretty sure that you'd need to manually parse it via regex or another method. Comments aren't seen as DOM elements as far as I'm aware. share|improve this answer Comments are DOM elements. Is just that their contents aren't parsed as XML. –  Ionuț G. Stan Jun 15 '09 at 7:02 add comment You can use a DIV with a costum attribute like Dojo does a lot: <div ParseByCMS="true">foobar foo bar foobaz</div> After that you just use javascript or xslt to parse it and remove it. share|improve this answer add comment If you're using PHP. $xpath = new DOMXpath(new DOMDocument()); // Search for comments $comments = $xpath->query('//comment()'); share|improve this answer add comment Your Answer
12
6
Take the 2-minute tour × I added a file to the index with: git add somefile.txt I then got the SHA1 for this file with: git hash-object somefile.txt I now have a SHA1 and I would like to retrieve the filename of the object in the index using the SHA1. git show 5a5bf28dcd7944991944cc5076c7525439830122 This command returns the file contents but not the name of the file. How do I get the full filename and path back from the SHA1? share|improve this question add comment 4 Answers up vote 8 down vote accepted There's no such direct mapping in git as the name of the file is part of the tree object that contains the file, not of the blob object that is the file's contents. It's not a usual operation to want to retrieve a file name from a SHA1 hash so perhaps you could expand on a real world use case for it? If you're looking at current files (i.e. the HEAD commit) you can try the following. git ls-tree -r HEAD | grep <SHA1> If you want to find the contents in previous commits you'll need to do something more like this. git rev-list <commit-list> | xargs -n1 -iX sh -c "git ls-tree -r X | grep <SHA1> && echo X" share|improve this answer Thanks for that. I didn't know you could do that –  Jonathan Jan 20 '09 at 9:01 If you don't know what to put in for <commit-list>, --all will search across all branches in the repository. –  EoghanM Sep 15 '11 at 13:53 My version of git ls-tree only accepts 1 revision as argument (git v1.7.4.4). I adapted yours to for rev in $(git rev-list --all); do git ls-tree -r $rev | grep $SHA; done | uniq –  dboehmer Nov 1 '11 at 19:06 @halo: I'm fairly sure everyone's ls-tree takes a single revision argument, that's why I used -n1 with xargs. –  Charles Bailey Nov 1 '11 at 21:59 @CharlesBailey: Ah, never knew about the -n option. Seems to be very useful. Thanks for the hint! However the command as listed above didn't work for me. Don't know why but luckily solved my problem in the meantime. –  dboehmer Nov 2 '11 at 9:14 add comment The following shell script is heavily based on http://stackoverflow.com/questions/223678/git-which-commit-has-this-blob and the answer provided by Aristotle Pagaltzis. # go over all trees | while read tree commit subject ; do git ls-tree -r $tree | grep "$obj_hash" \ | while read a b hash filename ; do if [ "$hash" == "$obj_hash" ]; then echo $f if $f ; then break; fi if $f; then break; fi I'm sure someone could beautify this script but it does work. The idea is to look at all trees commited and search for your specific hash. share|improve this answer add comment Commit the file and note the sha1 hash of the commit object. After that use git ls-tree <commit-sha1> and you will get the names of the files with the hashes. Check the manual pages for more options. share|improve this answer Good answer IMO! –  Rob Oct 29 '09 at 23:24 add comment git rev-list <commit-list> won't include any commits which were for example removed by rebase -i and now are referenced only by reflog, so if blob is not found by command above you should check also reflog ie like this: git reflog --all | cut -d\ -f1 | xargs -n1 -iX sh -c "git ls-tree -r X | grep <BLOB_SHA> && echo X" share|improve this answer add comment Your Answer
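The same search as the shell loop above, wrapped in a short Python script (assumes git is on PATH and that you run it from inside the repository):
import subprocess, sys
def git(*args):
    return subprocess.check_output(("git",) + args, text=True).splitlines()
def find_blob(blob_sha):
    seen = set()
    for commit in git("rev-list", "--all"):
        for entry in git("ls-tree", "-r", commit):
            # each line looks like: "<mode> blob <sha>\t<path>"
            meta, path = entry.split("\t", 1)
            if meta.split()[2].startswith(blob_sha) and path not in seen:
                seen.add(path)
                print(path, "in commit", commit)
find_blob(sys.argv[1])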
5
3
Take the 2-minute tour × I'm making an SCons file for building Docbook documentation. In order to trace dependencies I would like some way to resolve catalog file lookups to an absolute path to a file. So say I have a bit of Docbook XML : <title>Docbook example document</title> <xi:include href="file:///common/logo.xml" <xi:include href="chap1/chap1.xml"/> <xi:include href="chap2/chap2.xml"/> <xi:include href="chap3/chap3.xml"/> <xi:include href="chap4/chap4.xml"/> and a catalog.xml file : <catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog"> rewritePrefix="file:///home/kst/svn/TOOLS/Docbook/stylesheet/" /> rewritePrefix="file:///home/kst/svn/TOOLS/Docbook/common/" /> <nextCatalog catalog="/etc/xml/catalog" /> Getting the xinclude href string is no problem using lxml but I'm stuck there. What I need is some way to get the absolute filename that file:///common/logo.xml resolves to (in this case /home/kst/svn/TOOLS/Docbook/common/logo.xml) from the catalog file. It needs to be some kind of Python code so I can use it in my SConstruct file without too much hassle. Any help is appreciated. share|improve this question add comment 1 Answer up vote 0 down vote accepted Lxml uses the catalog support from libxml2. Use the environment variable XML_CATALOG_FILES to provide a list of catalogs (you could set this from python as well, using os.environ), or, if this variable is not present, it checks for the existence of /etc/xml/catalog (can't use this one on windows of course). An alternative would be to use a custom URI resolver. You can find more information in the lxml docs EDIT: apparently, the question was not about the actual xinclude processing, which works, but about a way to "query" the catalog, or ask it for the actual filenames that would be used for the inclusions. Lxml (at least currently) has no API to do that. The underlying libxml2 library does support this, however, and the "original" libxml2 python bindings allow you to do this (easy documentation is lacking though, the docstrings in the source code of the libxml2 help, however). So, although this module is not nearly as nice to use than lxml, it seems to be your best bet. Example which seems to work: >>> import libxml2 >>> libxml2.loadCatalog('catalog.xml') >>> print libxml2.catalogResolveURI('file:///common/logo.xml') share|improve this answer I've been trying to do this but I haven't been able to get it to work. Note that I am not interested in validating my document but in getting the filenames of any xml files the document includes and therefore depends on. –  Kevin Steffensen Aug 29 '11 at 12:25 As far as I know, it does not only apply to validating, but also to xinclude resolving. How dit you try it? Did you get any specific errors? –  Steven Aug 29 '11 at 15:16 @Kevin Steffensen: I just tested a simple example myself, and xinclude with a catalog seems to work just fine? It would seem that there is something wrong with either the file location, or your xpointer (which would need an id attribute with the value of "logo", I used xml:id="logo" in my test. You could maybe try without the xpointer to check whether you can include the whole document first, then go on with the xpointer) –  Steven Aug 29 '11 at 18:45 Actual parsing of the document using xinclude and xpointer works fine. But I want to scan some xml file and find out the filenames of the files that xml file depends on and this is where I fail. I've tried iterating over the <xi:include> elements and grabbing the href attribute. 
This works but gives me references like 'file:///common/example.xml'. I need to change this reference into an actual filename using the information in my catalog somehow. –  Kevin Steffensen Aug 29 '11 at 19:51 Ah, you mean you do NOT want to process the xincludes and actually include the content, only look up the actual filenames via the catalog? (Although I think libxml2 supports that, lxml has no api to do that as far as I know) –  Steven Aug 29 '11 at 20:05 show 2 more comments Your Answer
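Putting the two halves together for the SCons use case, lxml to read the xi:include hrefs and libxml2 to map each href through the catalog, a sketch could look like this ('catalog.xml' and 'book.xml' are placeholder names):
import os
import libxml2
from lxml import etree
XI_INCLUDE = "{http://www.w3.org/2001/XInclude}include"
libxml2.loadCatalog("catalog.xml")
def docbook_deps(path):
    deps = []
    for inc in etree.parse(path).iter(XI_INCLUDE):
        href = inc.get("href")
        if not href:
            continue
        resolved = libxml2.catalogResolveURI(href)
        if resolved:
            # strip the file:// scheme to get a plain filesystem path
            deps.append(resolved[len("file://"):] if resolved.startswith("file://") else resolved)
        else:
            # relative href with no catalog entry: resolve against the document
            deps.append(os.path.join(os.path.dirname(path), href))
    return deps
print(docbook_deps("book.xml"))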
10
6
Take the 2-minute tour × As with the command line in Pylons call the REST function from controller such as update? How to pass a request.POST to update function? share|improve this question add comment 2 Answers up vote 1 down vote accepted You need to use paster's post command. Below, I post to /login/attempt of a local app I've wrote. $ paster post development.ini /login/attempt email_address=me password=invalid ## It returns this JSON {"status": "fail", "value": "me is not a registered email address."} Here is the docs for paster post - Usage: C:\cygwin\home\jaime\virtualenv\sstesting\Scripts\paster-script.py post [options] CONFIG _FILE URL [OPTIONS/ARGUMENTS] Run a request for the described application This command makes an artifical request to a web application that uses a paste.deploy configuration file for the server and application. Use 'paster request config.ini /url' to request /url. Use 'paster post config.ini /url < data' to do a POST with the given request body. If the URL is relative (doesn't begin with /) it is interpreted as relative to /.command/. The variable environ['paste.command_request'] will be set to True in the request, so your application can distinguish these calls from normal requests. Note that you can pass options besides the options listed here; any unknown options will be passed to the application in environ['QUERY_STRING']. Options: -h, --help show this help message and exit -v, --verbose -q, --quiet -n NAME, --app-name=NAME Load the named application (default main) --config-var=NAME:VALUE Variable to make available in the config for %()s substitution (you can use this option multiple times) --header=NAME:VALUE Header to add to request (you can use this option multiple times) --display-headers Display headers before the response body share|improve this answer add comment The simplest thing would be to make a HTTP POST request directly: $ curl -d 'arg1=value&arg2=another' http://host/path/controller/responds/to/ share|improve this answer add comment Your Answer
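And if the caller is itself Python rather than a shell, the equivalent of the curl example is a few lines with the standard library (Python 3 names shown; the URL and form fields are placeholders):
from urllib.parse import urlencode
from urllib.request import urlopen
data = urlencode({"arg1": "value", "arg2": "another"}).encode("ascii")
# Passing a body makes urlopen issue a POST, like curl -d above.
with urlopen("http://host/path/controller/responds/to/", data) as resp:
    print(resp.read().decode())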
8
3
Take the 2-minute tour × I'm a mac user giving vim a serious try. Most of the GUI editors I'm used to allow me to open a directory as a "project" by executing a command like: edit ~/www/example.com/ The vim equivalent vim ~/www/example.com/ will show me a list of files in the directory, and I can open them. But it does not set vim's working directory to that path, I have to run :cd . to set the working directory. Is there some way, perhaps with a shell script, to open vim and have it's working directory set to a given path? I'm actually using MacVim, if that makes any difference. share|improve this question In directory view just hitting c will cd you into that directory - doesn't do exactly what you want, but its worth knowing. –  Michael Anderson May 8 '11 at 23:27 It's ok to propose answers to your own question. I recommend moving your work in progress answer out of the question, since it is not part of the question. There is even a badge you can earn for answering your own question with a score of 3 or higher. (ps nice answer!) –  Ziggy May 9 '11 at 4:49 @Ziggy thanks. I did try answering my own question, but it did not allow it (something about waiting 24 hours before answering your own question?). Wasn't sure what to do, but have now posted it as an answer. –  Abhi Beckert May 10 '11 at 4:55 add comment 5 Answers (cd /path/to/dir && vim file) Less so: vim /path/to/dir/file +':cd %:h' You can always map :cd %:h to a convenient key or put in an autocommand (I wouldn't actually recommend the latter, but there is no arguing about taste) Oh and for directories instead of files: :cd % is quite enough share|improve this answer Thanks, that pointed me in the right direction! But isn't the perfect answer, because the "file" you refer to does not exist, I want to open vim with a given working directory, not open a file in vim (the file I want to open is likely to be several sub directories deep) –  Abhi Beckert May 8 '11 at 23:01 add comment up vote 3 down vote accepted Thanks to @sehe's suggestions, I came up with this. Not sure if it's the best solution, but it seems to work. if [ "$#" -eq 1 ];then # is there a path argument? if test -d $1;then # open directory in vim vim $1 +':cd %' else # open file in vim vim $1 +':cd %:h' else # no path argument, just open vim share|improve this answer add comment Would this help? set autochdir I found it http://vim.wikia.com/wiki/Set_working_directory_to_the_current_file share|improve this answer I think autochdir would break my workflow, I work with projects containing several thousand files across hundreds of directories and need the working directory to be the "root" of the project. –  Abhi Beckert May 8 '11 at 23:03 add comment Try adding the following to your .vimrc let g:netrw_liststyle=3 let g:netrw_keepdir=0 This will make the directory browsing use a tree style for showing the files (you can expand a directory by putting the cursor on a directory and hitting enter) and make the current working directory also be the one you are browsing. You might also be interested in the NERDTree plugin that provides a directory browser that is more advanced than the built in one. It has an option let g:NERDTreeChDirMode=2 to make the current directory match the root of the displayed tree or let g:NERDTreeChDirMode=1 to change the directory whenever you use a command (:e or :NERDTree) to browse a new directory. share|improve this answer add comment $ cd ~/my/working/directory $ vim . share|improve this answer add comment Your Answer
7
6
Take the 2-minute tour × I am using python 2.7 & python 3.1.3. But in my python i am unable to "import gdb" It is giving me error as >>> import gdb Traceback (most recent call last): ImportError: No module named gdb Whats a reason for this, how should i has to solve this problem. share|improve this question Maybe a dup of stackoverflow.com/questions/3482869/… –  Donovan Jan 25 '11 at 10:45 Alberteddu have u worked on this. cos i am unable to understand the documentation given in the link. Can you please guide me. –  Sagar Gupta M. Jan 25 '11 at 11:03 What's your OS? –  Donovan Jan 25 '11 at 11:18 my os is windows xp –  Sagar Gupta M. Jan 25 '11 at 14:19 You did not give enough information in your question. Where did you hear about this file, and what are you expecting it to do? What are you trying to achieve? –  Sven Marnach Jan 27 '11 at 15:11 show 3 more comments 4 Answers import gdb only works when your Python code in running within the GDB process. It's not supposed to work from the regular system Python interpreter. • GDB embeds the Python interpreter so it can use Python as an extension language. • You can't just import gdb from /usr/bin/python like it's an ordinary Python library because GDB isn't structured as a library. • What you can do is source MY-SCRIPT.py from within gdb which is equivalent to starting gdb with gdb -x MY-SCRIPT.py. Here's an example, save the file below to t.py: import gdb gdb.execute('file /bin/cat') print o $ gdb -q -x t.py and you'll see the PLT stub for exit() disassembled. On x86-64 Linux: Dump of assembler code for function exit@plt: 0x0000000000401ae0 <+0>: jmpq *0x20971a(%rip) # 0x60b200 <exit@got.plt> 0x0000000000401ae6 <+6>: pushq $0x3d 0x0000000000401aeb <+11>: jmpq 0x401700 End of assembler dump. I collected some resources on learning the GDB Python API here. share|improve this answer add comment You can follow this tutorial to install PythonGDB. The Python code depends on a C extension. For Windows, there is a recent enough gdb build in MinGW, but it doesn't seem to include the Python module you can import (still supports Python scripting in gdb). You have to install MinGW and then install the gbd package using mingw-get install gdb. If you use Cygwin, there's again a recent enough gdb in Cygwin Ports, without a Python module but with Python scripting support. I suppose it'd be possible to build gdb from source in either platform and get the Python module. share|improve this answer PythonGDB does not include any file of this name (at least not in my installation). There is a file of this name in Pydb though. With the information given, it's imply undecidable what the OP is after. –  Sven Marnach Jan 27 '11 at 15:21 There's a gdb module, though, so he can indeed import gdb. –  TryPyPy Jan 27 '11 at 15:23 Are you able to run gdb by itself on Windows? How did you install it? –  TryPyPy Jan 27 '11 at 15:35 Sigh, there's no gdb module in MinGW's gdb. –  TryPyPy Jan 27 '11 at 15:46 Ok TryPyPy, i will try to give some more information as much as possible. I am working in WINDOWS XP, trying to build a script in PYTHONWIN32, i am writing this script so that it has to invoke a shell provided by MinGW, there it has to run gdb. In this gdb python script as to run my program. Every thing has to controlled by python script. 1st layer -> Python Script 2nd layes -> gdb 3rd layer -> my program. –  Sagar Gupta M. Jan 28 '11 at 6:07 add comment I can't test now, but I think you need to configure and build a python enabled gdb. Take a look at this guide. 
Hope that helps. Edit: this is outdated, I think. Anyway, you always need to build and configure a python enabled gdb. You can script gdb using the Python programming language. This feature is available only if gdb was configured using --with-python. You have to configure gdb using that option: Where location is the location of python you would like to use gdb with. share|improve this answer add comment I just ran into the similar situation when trying to debug Webkit: $ python Tools/gdb/webkit.py Traceback (most recent call last): File "Tools/gdb/webkit.py", line 38, in <module> import gdb ImportError: No module named gdb I then realized that this script should be invoked in gdb to make it working: (gdb) source Tools/gdb/webkit.py (gdb) p run $1 = (const WebCore::TextRun &) @0x7fffffffa450: {m_characters = "Plugin Testsa", m_len = 12, m_xpos = 0, m_padding = 0, m_allowTabs = false, m_rtl = false, m_directionalOverride = false, m_applyRunRounding = true, m_applyWordRounding = true, m_disableSpacing = false} Hope this helps. share|improve this answer add comment Your Answer
14
7
Take the 2-minute tour × I merged the beta branch into the master branch. I pushed to origin. I now want master to be as it was prior to the merger both locally and remotely. A good answer for undoing a merge that was already pushed suggests git revert -m 1 commit_hash If this is indeed the way to go, how can I determine commit_hash? I unsuccessfully tried the hash returned by merge-base: $ git merge-base --all master beta $ git revert -m 1 1f4b949b7ef97abf913ae672e3acd0907abfac1b error: Mainline was specified but commit 1f4b949b7ef97abf913ae672e3acd0907abfac1b is not a merge. fatal: revert failed I've examined both git-log and gitk renditions of the branches, but they're very long, and I am uncertain enough of my interpretation to feel I should seek assistance before making a perhaps bigger mess. Beta was derived from v2 which was derived from master. There have been some mergers from master into v2 and beta along the way as I've kept the new branches up-to-date with master. The merger in the direction from beta into master was a mistake I wish to correct. Once I do determine the merge point, if I find any commits made on master after the merger that really should be on the beta branch, what's the best way to move them over? share|improve this question Try log --all --graph --pretty=tformat:'%Cred%h%Creset -%C(yellow)%d%Creset%s %Cgreen(%an %cr)%Creset' --abbrev-commit --date=relative (I personally alias it), it will display the commits of all the branches and their date, and also where did the merges happen. –  Samy Dindane Jul 19 '12 at 15:32 add comment 2 Answers up vote 3 down vote accepted You need to find the commit of the merge, git merge-base tells you the commit where you can do the merge. It basically is the last commit that exists in those two branches. The merge commit exists in your master branch only, unless you created a new branch after the merge, but that's not relevant here. :) To find the merge commit try: git log master ^beta --ancestry-path --merges The needed commit is the very last commit. But please read up on Linus' write up: http://www.kernel.org/pub/software/scm/git/docs/howto/revert-a-faulty-merge.txt share|improve this answer add comment Also look at http://sethrobertson.github.com/GitFixUm/ which walks you through almost any git problem, including pushed merges. However...pushed merges have no easy solution. share|improve this answer add comment Your Answer
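The two steps (find the offending merge commit on master that is not reachable from beta, then revert it with -m 1) can also be scripted. A rough sketch, with branch names taken from the question and git assumed on PATH:
import subprocess
def git(*args):
    return subprocess.check_output(("git",) + args, text=True).strip()
# Per the accepted answer, the wanted commit is the last one listed here.
merges = git("log", "master", "^beta", "--ancestry-path", "--merges",
             "--pretty=format:%H").splitlines()
if merges:
    merge_commit = merges[-1]
    print("reverting merge", merge_commit)
    subprocess.check_call(["git", "revert", "-m", "1", merge_commit])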
6
8
Capistrano and database.yml June 3rd, 2009 at 7:06 am • permalink10 comments Capistrano logoLast week, an user asked the Capistrano mailing list about database password best practices. This reminded me that I never posted here a Capistrano recipe I created almost one year ago to solve exactly this problem. Which problem? Imagine you need to deploy a new Rails application. As you probably know, Rails stores all the database configurations in a single file called config/database.yml, including database authentication credentials. This file usually lives in your repository along with all your application code base. However, exposing real world passwords to all developers with read access to the repository can lead to major security problems. It’s likely you don’t want to store sensitive data in your repository, thus you need to automatically generate the config.yml file somehow on deploy or on setup. If you are using Capistrano to deploy your Rails application, you can ask Capistrano to generate and upload the file for you. Let me show you how. One Problem, Many Solutions As usual, one problem comes with many different solution. That’s good because this is definitely better than a problem without a reasonable solution… do you agree? As Robert James pointed out in its email, there are at least 3 different approaches to solve this issue. • You can store sensitive data in your Capistrano deploy script. The downside of this solutions is that every developer with read access to the Capistrano script have access to the data as well. • You can store sensitive data on your server, but it requires some kind of manual setup. • You can use a password-less system, but this is probably the worst idea ever. Don’t misunderstand me, shared keys are a wonderful authentication system and I widely use them, but I didn’t find an effective alternative for database authentication. My recipes combines the second choice with some additional features, taking advantage of Capistrano ability to execute commands simultaneously on multiple servers. Capistrano database.yml task The recipe is available as a gist. I think it is fairly self-explanatory and the documentation section at the beginning should give you a good overview of how it works. # = Capistrano database.yml task # Provides a couple of tasks for creating the database.yml # configuration file dynamically when deploy:setup is run. # Category:: Capistrano # Package:: Database # Author:: Simone Carletti # Copyright:: 2007-2009 The Authors # License:: MIT License # Link:: http://www.simonecarletti.com/ # Source:: http://gist.github.com/2769 unless Capistrano::Configuration.respond_to?(:instance) abort "This extension requires Capistrano 2" Capistrano::Configuration.instance.load do namespace :db do desc <<-DESC Creates the database.yml configuration file in shared path. By default, this task uses a template unless a template called database.yml.erb is found either is :template_dir or /config/deploy folders. The default template matches the template for config/database.yml file shipped with Rails. When this recipe is loaded, db:setup is automatically configured to be invoked after deploy:setup. You can skip this task setting the variable :skip_db_setup to true. This is especially useful if you are using this recipe in combination with capistrano-ext/multistaging to avoid multiple db:setup calls when running deploy:setup for all stages one by one. 
task :setup, :except => { :no_release => true } do default_template = <<-EOF base: &base adapter: sqlite3 timeout: 5000 database: #{shared_path}/db/development.sqlite3 <<: *base database: #{shared_path}/db/test.sqlite3 <<: *base database: #{shared_path}/db/production.sqlite3 <<: *base location = fetch(:template_dir, "config/deploy") + '/database.yml.erb' template = File.file?(location) ? File.read(location) : default_template config = ERB.new(template) run "mkdir -p #{shared_path}/db" run "mkdir -p #{shared_path}/config" put config.result(binding), "#{shared_path}/config/database.yml" desc <<-DESC [internal] Updates the symlink for database.yml file to the just deployed release. task :symlink, :except => { :no_release => true } do run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml" after "deploy:setup", "db:setup" unless fetch(:skip_db_setup, false) after "deploy:finalize_update", "db:symlink" The following instructions basically represents the documentation you can find at the top of the original recipe. This extension requires the original config/database.yml to be excluded from version control. You can easily accomplish this by renaming the file (for example to database.example.yml) and configuring your SCM in order to ignore the database.yml file. The following example demonstrate how to rename the file and ignore the original one with Subversion. $ svn mv config/database.yml config/database.example.yml $ svn propset svn:ignore 'database.yml' config If your repository is powered by Git, type the following commands. $ git mv config/database.yml config/database.example.yml $ echo 'config/database.yml' >> .gitignore If you don’t want to rename your file, there’s an other alternative. You can customize the recipe in order to force Capistrano to delete database.yml file after a successful deploy:code_update and before running the database:symlink task. Include this file in your deploy.rb configuration file. Assuming you saved this recipe as capistrano_database.rb: require "capistrano_database" Now, when deploy:setup is called, this script will automatically create the database.yml file in the shared folder. Each time you run a new deploy, this script will also create a symlink from your application config/database.yml pointing to the shared configuration file. In case you need to run deploy:setup again and you don’t want Capistrano to ask for a database password, set the skip_db_setup option to true. This is especially useful in combination with capistrano multi-stage recipe when you already setup your server and you share the same environment across all the stages. $ cap deploy:setup -s "skip_db_setup=true" Custom template By default, this script creates an exact copy of the default database.yml file shipped with a new Rails 2.x application. If you want to overwrite the default template, simply create a custom Erb template called database.yml.erb and save it into config/deploy folder. Although the name of the file can’t be changed, you can customize the directory where it is stored defining a variable called :template_dir. 
# store your custom template at foo/bar/database.yml.erb set :template_dir, "foo/bar" # example of database template base: &base adapter: sqlite3 timeout: 5000 database: #{shared_path}/db/development.sqlite3 <<: *base database: #{shared_path}/db/test.sqlite3 <<: *base adapter: mysql database: #{application}_production username: #{user} password: #{Capistrano::CLI.ui.ask("Enter MySQL database password: ")} encoding: utf8 timeout: 5000 Because this is an Erb template, you can place variables and Ruby scripts within the file. For instance, the template above takes advantage of Capistrano CLI to ask for a MySQL database password instead of hard coding it into the template. This solves the original problem of storing sensitive data in your repository or deploy script. 1. Capistrano: Managing an uploads folder 2. Running Capistrano with Passenger (mod_rails) 3. Capistrano: Executing a command as root without using sudo 4. How to restart God when you deploy a new release via Capistrano 5. Capistrano: File Transfer actions Filed in Programming • Tags: , , , , , Mark says: Hey, this was exactly what I was looking for! Thank you very much for sharing this recipe. Looking forward to more articles about Capistrano. :) Blaenk says: Thanks for the great script. For some reason though, the external file method did not work for me. The file was copied and all perfectly fine, but when I viewed it on my server, the variables were never expanded (It still showed #{var} and it didn’t ask me for the password). After copying/pasting the contents of the file into the HERE document directive, everything worked fine. Thanks again I appreciate it. kiran says: This was an awesome post and was really useful in setting up my database.yml. Thank you so much. Anuradha Mulukutla says: This was extremely helpful, thanks! Just a small correction – for the substitutions (including the prompts for the password) to work correctly in the ERB file, I used ruby expression tags <%= … %> rather than the #{} string interpolation. Pratik Khadloya says: Very very helpful. Thanks a lot! Josh says: Hello! This looks really helpful, but where do I have to save the capistrano_database_yml.rb file? When requiring it in deploy.rb, I’m getting “`gem_original_require’: no such file to load — capistrano_database_yml.rb”… You can save it where you want. For example, I usually store all deploy files in a /config/deploy folder. Make sure the file is in your $LOAD_PATH. Justin Sarma says: Nice tutorial! You can enhance the security a bit by using password_prompt instead of ask. That way no one will see what you type on the screen: Capistrano::CLI.password_prompt(“Enter MySQL database password: “) Also, it’s a little off-topic, but the security hole that worries me much more than this is how database.yml has to store DB passwords on the server, so if the web server’s apache user account is compromised, so is the DB. You could encrypt the password in another file only readable by the Apache user, but the key to unlock it still has to be readable to the Apache user, so it’s ultimately security through obscurity. This may simply be an unsolvable problem. Gazzang’s ezNcrypt claims it can solve it with remote key servers, but that’s quite expensive. Any ideas, anyone? Unfortunately it’s the way Rails works by default. Andrew says: I would like to run a script during deployment — and the script needs to access the database. 
So I’d like to access database.yml during the deploy and pull out the database access information so that I can pass it to the script. Is there a way to do this? At the moment I use a .my.cnf file, so the script works with that information, but that is not ideal since I am storing the same information in two places. And it isn’t the Rails way…
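As a rough sketch of what the commenter is after, a deploy-time script could read the generated database.yml directly instead of keeping a second copy in .my.cnf. This assumes PyYAML is available on the server and that the file lives under the shared path used by the recipe above; the path below is illustrative only.

    # Minimal sketch: pull the credentials out of the deployed database.yml
    # and hand them to whatever script needs them.  The path is an example;
    # adjust it to your shared_path.
    import yaml  # PyYAML

    def db_settings(path, env="production"):
        with open(path) as handle:
            config = yaml.safe_load(handle)
        return config[env]  # adapter, database, username, password, ...

    settings = db_settings("/var/www/app/shared/config/database.yml")
    print(settings["database"], settings["username"])

That keeps the credentials in one place: the script simply reads whatever Capistrano generated.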
19
6
Take the tour × Possible Duplicate: Python, opposite function urllib.urlencode I have a url string which is already formatted with key=value pairs and &'s in between, as created by urllib's urlencode function. Is there a standard Python library utility to reverse this process? That is: Given a string representing a url, return a string containing the base url and a dictionary containing the key-value pairs in the url. I can cook up a simple solution on my own that does this for reasonable urls, but I imagine weird things can happen with an arbitrary url. So is there a standard library function that does this safely? share|improve this question add comment marked as duplicate by Waleed Khan, phihag, unutbu, JeremyKun, Burhan Khalid Oct 28 '12 at 21:19 1 Answer up vote 4 down vote accepted The built-in urlparse does what you want: >>> bits = urlparse.urlparse('http://www.example.com/foo?bar=zoo&a=b') >>> bits.query >>> urlparse.parse_qs(bits.query) {'a': ['b'], 'bar': ['zoo']} share|improve this answer Perfect! Thanks for the quick response. –  JeremyKun Oct 28 '12 at 21:12 add comment
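For completeness, a small sketch of the full round trip, splitting the URL back into a base and a query dictionary; on Python 3 the same functions live in urllib.parse, and the example URL is the one from the answer.

    # Reverse of urlencode: base URL plus a dict of the query parameters.
    # Python 3 spelling; on Python 2 import these names from the urlparse module.
    from urllib.parse import urlparse, parse_qs, urlunparse

    url = 'http://www.example.com/foo?bar=zoo&a=b'
    bits = urlparse(url)
    base = urlunparse((bits.scheme, bits.netloc, bits.path, '', '', ''))
    params = parse_qs(bits.query)   # values come back as lists

    print(base)     # http://www.example.com/foo
    print(params)   # {'bar': ['zoo'], 'a': ['b']}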
6
6
Take the 2-minute tour × I have 4 text files that I want to read and find the top 5 most occurring names of. The text files have names in the following format "Rasmus,M,11". Below is my code which right now is able to call all of the text files and then read them. Right now, this code prints out all of the names in the files. def top_male_names (): for x in range (2008, 2012): txt = "yob" + str(x) + ".txt" file_handle = open(txt, "r", encoding="utf-8") line = file_handle.readline().strip() while line != "": print (line) line = file_handle.readline().strip() My question is, how can I keep track of all of these names, and find the top 5 that occur the most? The only way I could think of was creating a variable for each name, but that wouldn't work because there are 100s of entries in each text file, probably with 100s of different of names. share|improve this question Look into collections.Counter in the standard library. –  Wooble Nov 25 '13 at 14:34 There is no need to file_handle.seek(0). Delete that line with no fear. –  Roberto Bonvallet Nov 25 '13 at 14:41 +1 for Counter, see docs.python.org/2/library/collections.html#counter-objects which has an example almost identical to what you are trying to do. –  jonrsharpe Nov 25 '13 at 14:41 add comment 2 Answers up vote 1 down vote accepted This is the gist of it: from collections import Counter counter = Counter() for line in file_handle: name, gender, age = line.split(',') counter[name] += 1 print counter.most_common() You can adapt it to your program. share|improve this answer add comment If you need to count a number of words in a text, use regex. For example import re my_string = "Wow! Is this true? Really!?!? This is crazy!" words = re.findall(r'\w+', my_string) #This finds words in the document >>> words ['Wow', 'Is', 'this', 'true', 'Really', 'This', 'is', 'crazy'] "Is" and "is" are two different words. So we can just capitalize all the words, and then count them. from collections import Counter cap_words = [word.upper() for word in words] #capitalizes all the words word_counts = Counter(cap_words) #counts the number each time a word appears >>> word_counts Counter({'THIS': 2, 'IS': 2, 'CRAZY': 1, 'WOW': 1, 'TRUE': 1, 'REALLY': 1}) Now reading a file : import re from collections import Counter with open('file.txt') as f: text = f.read() words = re.findall(r'\w+', text ) cap_words = [word.upper() for word in words] word_counts = Counter(cap_words) Then you only have to sort the dict containing all the words, for the values not for keys and see the top 5 words. share|improve this answer add comment Your Answer
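Putting the accepted answer together with the question's loop over the yob files, a sketch like the following reports the five most common names; the filenames and the male-only filter are taken from the question, and counting occurrences (rather than summing the third column) follows the accepted answer.

    # Count names across yob2008.txt .. yob2011.txt and print the top five.
    # Lines look like "Rasmus,M,11".
    from collections import Counter

    def top_male_names(n=5):
        counter = Counter()
        for year in range(2008, 2012):
            with open("yob{}.txt".format(year), encoding="utf-8") as handle:
                for line in handle:
                    line = line.strip()
                    if not line:
                        continue
                    name, gender, count = line.split(",")
                    if gender == "M":
                        counter[name] += 1   # or += int(count) to weight by births
        return counter.most_common(n)

    print(top_male_names())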
8
4
Take the 2-minute tour × I have a text file which contains 2 columns separated by a tab, containing some data that I would like to read into arrays and perform some simple operations for instance plot the data. The data in the second column is in scientific notation and can takes extremely small values such varying from order of magnitude 10e-27 10e-50. For instance here is a sample of the data 0.00521135 -1.197189e-31 0.00529274 -7.0272737e-32 0.00530917 -6.0163467e-32 0.00532565 -4.9990405e-32 0.00534218 -3.9747722e-32 0.00535876 -2.9457271e-32 0.0053754 -1.9094542e-32 0.00539208 -8.6847519e-33 0.00540882 1.7851373e-33 0.00542561 1.2288483e-32 0.00544245 2.2850705e-32 0.00545934 3.3432858e-32 0.00547629 4.4084594e-32 0.00549329 5.4765499e-32 0.00551034 6.5491709e-32 Here is what my code looks like : import numpy as np import matplotlib.pyplot as plt with open('data.dat', 'r') as f2: lines = f2.readlines() data = [line.split()for line in lines] data2 = np.asfarray(data) x1 = data2[:,0] y1 = data2[:,1] plt.plot(x1, y1) I have used this code to test on sample data (in .dat format) files and it seems to work fine, however when I run this code on my data set it gives me the following error. Traceback (most recent call last): File "read_txt_col.py", line 17, in <module> data2 = np.asfarray(data) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages /numpy/lib/type_check.py", line 103, in asfarray return asarray(a,dtype=dtype) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/numeric.py", line 235, in asarray ValueError: setting an array element with a sequence. Could someone please help!! share|improve this question There are plenty of SO answers on this: stackoverflow.com/questions/4674473/… stackoverflow.com/questions/13310347/… THe basic problem is that you appear to have sequences of different lengths. I would try to find the problem in your data by binary search: load half the data. Successful? Add half of the data and test that. Successful? Add another half, etc. –  hughdbrown Sep 26 '13 at 8:59 add comment 1 Answer Don't reinvent the wheel!, it would be much more easy to use numpy.loadtxt: >>> import numpy as np >>> import matplotlib.pyplot as plt >>> data = np.loadtxt('data.dat') >>> x1 = data[:,0] >>> y1 = data[:,1] >>> plt.plot(x1, y1) >>> plt.show() enter image description here share|improve this answer For some more power, especially in error checking it is worth mentioning numpy.genfromtxt() –  Greg Sep 26 '13 at 11:55 add comment Your Answer
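Following the comment about numpy.genfromtxt, here is a hedged sketch on the same data.dat file: it reads the two whitespace-separated columns just like loadtxt, but when a row has the wrong number of columns the error message names the offending lines, which is usually the quickest way to find the bad record.

    # genfromtxt parses the scientific-notation column fine and reports the
    # line numbers of rows it cannot split into two columns.
    import numpy as np
    import matplotlib.pyplot as plt

    try:
        data = np.genfromtxt('data.dat')   # whitespace-delimited, shape (N, 2)
    except ValueError as err:
        raise SystemExit("bad input: {}".format(err))

    x1, y1 = data[:, 0], data[:, 1]
    plt.plot(x1, y1)
    plt.show()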
14
8
Take the 2-minute tour × I have hunted around various other posts and although there are some useful tips I haven't found a similar problem to mine so I thought I would ask. I have generated the following list: data2 = ['AN1_OUT,24','AN2_OUT,13','AN3_OUT,14','AN4_OUT,15'] What I want to do is identify the setting (AN1_OUT etc..) and the value (2,13 etc...) that accompanies it. I have successfully identified the setting by using the good old 'if-elif' as I only need to know this setting, however, I now need to separate out the value. So far I am using: data3 = re.findall('[0-9]{2}',data2[i]) byte1 = map(lambda n: int(n[:2]),data3) This is in a for loop that runs through all of the elements in the data2 list (4 in this example). for each 'i' I am getting the following: I know this is what I would expect, however, the problem arises when the value is a single digit such as: In this case I miss that value and it is not printed. I tried changing the regex in the data3 function to: data3 = re.findall('[0-9]{1,2}',data2[i]) However the problem with this is that it picks up the digit in AN1_OUT, AN2_OUT etc.. so I end up with: I have looked at various different ways to solve it but it is proving very elusive. Any help would be appreciated. share|improve this question Why not just .split(',')[:-1] to get everything after the comma? –  TyrantWave Sep 19 '13 at 16:32 add comment 3 Answers up vote 2 down vote accepted Append $ at the end to make it match only at the end of the input string: You can use \d instead of [0-9]: To avoid escape use raw string (r'raw string'): >>> re.findall(r'\d{1,2}$', 'AN3_OUT,14') >>> re.findall(r'\d+$', 'AN3_OUT,14') share|improve this answer Hey falsetru, that works perfectly, thanks for the detailed explanation. –  AimSkyward Sep 20 '13 at 7:14 add comment You can use look-behind to fetch the digit preceded by comma. Also, you can use [0-9]+ instead of [0-9]{1,2}, id you can have more digits. data3 = re.findall(r'(?<=,)[0-9]+',data2[i]) share|improve this answer add comment You can parse the strings you've described without using regular expressions. Just split on the comma! for item in data2: setting, value = item.split(',') if setting == 'AN1_OUT': value = int(value) # do stuff with value share|improve this answer add comment Your Answer
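For reference, the split-on-comma suggestion from the comment and the last answer, written out as a loop that builds a dict of setting name to integer value; data2 is the list from the question.

    # No regex needed: split each "NAME,VALUE" item on the comma.  Works for
    # one- or two-digit values and never matches the digit inside "AN1_OUT".
    data2 = ['AN1_OUT,24', 'AN2_OUT,13', 'AN3_OUT,14', 'AN4_OUT,15']

    values = {}
    for item in data2:
        setting, value = item.split(',')
        values[setting] = int(value)

    print(values)             # {'AN1_OUT': 24, 'AN2_OUT': 13, 'AN3_OUT': 14, 'AN4_OUT': 15}
    print(values['AN1_OUT'])  # 24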
6
4
Take the 2-minute tour × Hi I ve a script in iron python where a variable mite contain special characters. Ex name- megha_lohit url - http://url.com if name == megha_lohit: print 'success' else raise testcaseexception(failed) Here the code doesnt pass the if loop and enters teh else part failing the test case , even though name = megha_lohit(right hand side expression), same case is with url too. Could somebody help me out share|improve this question We'll be happy to help once you translate the gibberish to English :P –  Aurum Aquila Apr 26 '11 at 4:25 @Aurum aquila:its english in simple words...i ve clearly mentioned tat it is not able to accept special characters like: " :" "`" "_" (colon, escape , underscore)i hope u got it now!!!! –  meghana Lohit Apr 26 '11 at 7:12 1 Answer 1 up vote 0 down vote accepted By design variable names cannot contain ":" and "`". Underscores are ok. Maybe your problem is something else in your code. IronPython 2.7 ( on .NET 4.0.30319.1 >>> a_foo= "hello" >>> a:foo= "hello" File "<stdin>", line 1 a:foo= "hello" SyntaxError: unexpected token ':' >>> a`foo= "hello" File "<stdin>", line 1 a`foo= "hello" SyntaxError: unexpected token '`' share|improve this answer Hi, There is a method which will retrieve a value and return it to the variable assigned ex: title = self.GetTitleOfTheMovie() which may return a value like this "The Ranger:Part-2". When i compare this value with the if loop, it says the strings are not equal. Ex if title == "The Ranger:Part-2" print 'success' else print 'failure'. It prints failure. What mite be the reason for it not to pass the if loop even though the title value and right hand side expression are equal??? Here title contains the special character ":" –  meghana Lohit Apr 27 '11 at 4:08 If I code your example up it works. So there must be a difference between your values. Perhaps one has extra whitespace, a return character, non-breaking space - something. If you are still stuck, post a new questions with more code and sample information. –  WombatPM Apr 27 '11 at 12:56 Your Answer
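To illustrate WombatPM's diagnosis, a small sketch of how to expose an invisible difference between two "equal" strings; GetTitleOfTheMovie and the trailing newline are stand-ins for illustration, not something taken from the original script.

    # When an == check fails "even though the strings look the same", print
    # repr() of both sides: stray spaces, newlines or non-breaking spaces show up.
    expected = "The Ranger:Part-2"
    title = "The Ranger:Part-2 \n"   # stand-in for self.GetTitleOfTheMovie()

    print(repr(title), repr(expected))

    if title.strip() == expected:
        print('success')
    else:
        print('failure')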
7
6
Sign up × How do I find out what packages have been installed since the OS was installed? I do not want to know all packages installed, only those that were not part of the initial OS install, and have been explicitly installed afterwards. For the sake of this question, lets assume a fresh install, as I imagine distro upgrades would complicate matters. I would prefer to use command line, but a GUI solution would be OK if a command or script is not available. I've had a quick look at the man pages of dpkg and aptitude, but didn't see anything obvious. Also, the output of apt-cache show package-name or dpkg -s pkg-name doesn't seem to give any dates that can be compared against the date of OS installation (which I would have to work out how to get too). I have logwatch on a server that sends daily notifications of what has been installed. My guess is that it parses dpkg.log. I'm not sure this method would be a solution, as many of the install entries may have been logrotated out, especially on older systems. And ideally this should work for any system, desktop or server. It would also be great if the output could include the version of the package currently installed, but that may be asking too much, and I can always script it later once I have the package names. share|improve this question I don't want to list 'all' installed packages, only those installed after the OS was installed. For example, the dpkg --get-selections | grep -v deinstall command outputs packages like xorg and wget - which would have been part of the initial install. I will edit the question. –  drgrog May 12 '14 at 8:59 a diff between dpgk --get-selections 's output and this file will do something what you're looking for –  Ayush Shanker May 12 '14 at 9:07 Thanks. I have found a more accurate duplicate at… - and the answer is the closest so far, the manifest method seemed to give me everything on one machine (as if comm wasn't working), plus aptitude was not installed on another machine. –  drgrog May 12 '14 at 14:44 @karel the answer by bci, using the history.log, will not give you a full list if the logs do not go as far back as the original install –  drgrog May 12 '14 at 14:53 @drgrog I can understand your point of view. What you have done is an improvement on answers that have already been posted by others and not a duplicate and your question is a worthy question and not a duplicate. –  karel May 12 '14 at 16:44 2 Answers 2 All Ubuntu ISO ([UKLX]buntu/Ubuntu-gnome) comes with .manifest file that contains the list of all pre-installed packages in the ISO. You can find those manifest files in the same download dir as those ISO on any Ubuntu ISO mirrors. Take the list of available Ubuntu releases as an example; if you have Trusty 64-bit for example, the manifest link would be So once you have this file, just compare the package listing in it against the listing of all installed packages in your Ubuntu using comm command $ curl -O $ comm -23 <( dpkg --get-selections | awk '$2 ~ /^(install|hold)/ { print $1 }' | sort ) \ <( awk '{ print $1 }' ubuntu-14.04-desktop-amd64.manifest | sort ) To explain what the comm does, it takes input from 2 files - first one supplies the list of all currently installed packages and the second one the manifest file. The -3 opt suppresses lines that both files have and -2 suppresses lines that only the second file (manifest file, that is) has. 
So in the end your output contains only lines that only the first file has, and that gives you the packages you installed manually since the OS was installed. So there you have it. If you'd also like to see the package version next to the package name in the output, as Sylvain Pineau pointed out, pipe the comm command above to xargs dpkg-query -W -f='${binary:Package} ${Version}\n'. Or alternatively, with awk entirely; this too gives the same result as the command above: awk 'FNR==NR {arr[$1];next} !($1 in arr) { print $0 }' ubuntu-14.04-desktop-amd64.manifest <( dpkg-query -W -f='${binary:Package} ${Version}\n' ) See this link for an explanation of how the awk command works share|improve this answer @Sylvain Pineau - dpkg-query doesn't take input from stdin –  Flint May 12 '14 at 11:27 Sorry. Next time I'll double check. It should be [...] | xargs dpkg-query -W -f='${binary:Package} ${Version}\n'. Feel free to edit your answer with the version addition :) –  Sylvain Pineau May 12 '14 at 11:36 The initial-status.gz and dpkg-query method gives the most accurate and concise list for my needs. comm -13 \ <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort) \ <(comm -23 \ <(dpkg-query -W -f='${Package}\n' | sed 1d | sort) \ <(apt-mark showauto | sort) \ ) Why I like it, and not the others: The manifest comparison method from Flint's answer includes many dependencies and other packages possibly marked as 'required' and installed automatically. For example, it lists libvlc5 and vlc-data, whereas the method above only lists vlc. The history.log method will not list all packages if the logs do not go back as far as the release install. It also contains a lot of upgrade commands that would need to be filtered out. The dpkg --get-selections method, which is an accepted answer to a similar question, lists all packages and dependencies, including those installed with the release. It does not list only those installed explicitly. share|improve this answer Your Answer
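The same set-difference idea can be expressed in a few lines of Python if a script is easier to keep around than the shell pipeline; this is only a rough equivalent of the comm/awk commands above and assumes the Trusty manifest from the example has already been downloaded next to the script.

    # Installed packages (with versions) minus what shipped in the ISO manifest.
    import subprocess

    out = subprocess.check_output(
        ["dpkg-query", "-W", "-f", "${binary:Package} ${Version}\n"]
    ).decode()

    installed = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) == 2:              # skip known-but-not-installed entries
            installed[fields[0]] = fields[1]

    with open("ubuntu-14.04-desktop-amd64.manifest") as handle:
        shipped = {line.split()[0] for line in handle if line.strip()}

    for pkg in sorted(set(installed) - shipped):
        print(pkg, installed[pkg])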
10
3
Take the 2-minute tour × This question already has an answer here: hey, I'm trying to programming a crossword creator. using a given dictionary txt file and a given pattern txt file. The basic idea is using DFS algorithm. the problem begin when the dictionary file is v-e-r-y big (about 50000 words). then i recive the : Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded i know that there is a part in my program that wastes memory, but i don't know where it is, how to find it and how to fix it share|improve this question marked as duplicate by ThiefMaster Mar 18 '13 at 23:11 "How to find" is covered by this related question: stackoverflow.com/questions/2840421/… –  meriton Jun 6 '10 at 21:26 5 Answers 5 Does it really waste memory ? If you're loading a sizeable dictionary, then you may simply want to increase the JVM memory settings (the JVM has a maximum memory allocation - dependent on your platform, and configurable). $ java -Xmx512m .... would increase the maximum memory allocation of the JVM to 512m. If you think you have a memory leak (garbage collection not kicking in due to references not being released) then a profiler such as YourKit may be of use. Note that this isn't free, but the trial version may help you solve your problem. share|improve this answer hey, i already tried to increase the JVM memory.. Do you have maybe another advise? –  or.nomore Jun 6 '10 at 17:48 Have you tried profiling it ? –  Brian Agnew Jun 6 '10 at 17:54 sorry for my ignorance, what is profiling? –  or.nomore Jun 6 '10 at 18:18 Running the program with some sort of tool (e.g. YourKit) and getting statistics/metrics out such that you can determine where the bottlenecks (e.g. memory consumption) are. –  Brian Agnew Jun 6 '10 at 18:28 To solve this problem ( in linux based os ) do following 1) increase memory (so that this problem don't come frequently ) by configuring "domain.xml" in search for set it to higher value eg- 198m or 256m 2) kill the glassfish process to free the port on which it was running ( in my case it was 8686) open terminal (in linux based os) and type - sudo netstat -npl | grep 8686 this will result in something like.. tcp6 0 0 :::8686 :::* LISTEN 3452/java next use kill -9 3452 to kill that process ( 3452 in this case ) Now try to start glassfish, it should start. share|improve this answer Some Times, I have seen people initialize variables in loop like String s="tttt"; This should be avoided as it waste lot of memory. share|improve this answer It doesn't necessarily waste memory (and your example definitely doesn't, as string literals are pooled). It can even help as garbage collection of young objects is cheaper than collecting old objects. –  gustafc Jun 6 '10 at 18:35 Thanks a lot..this is something i wasn't not knowing.It really makes my day. –  Shashank T Jun 6 '10 at 18:49 This is a tricky problem. I've run into it once or twice, and increasing heap size doesn't help. Solving it with VM settings - you may even want to decrease the heap size (it once worked for me). You may also want to test different garbage collectors, I've had success using the G1 collector. General advice on how to avoid this error is also hard (or so it seemed to me when I researched this matter to solve my own problems). High infant mortality is probably good, since young objects are cheaper to collect than old ones. share|improve this answer large dictionary...mmm.. is there an absolute requirement of storing that directly in the memory of the jvm? 
A lazy chap like myself would store this in a database (in-memory even perhaps? - hypersonic for example), transfer the responsibility of searching through a list to the database while my program worked on creating interesting symmetric black and white square combinations :) Just a thought though. share|improve this answer
8
4
Forgot your password? Comment: Mint+Mate or CentOS (Score 1) 573 by doodleboy (#43266521) Attached to: Ask Slashdot: New To Linux; Which Distro? By which I mean, a distro that runs Gnome2. I've been using Linux as my primary desktop OS since sometime in the late 90's and I actually work as a shell programmer. I am not interested in using some new UI that is designed to run on a tablet, or that is written by some cabal of out of touch developers for their own masturbatory purposes. I want something that is easy to install that I don't have to waste a lot of time dicking around with. I assume most other people who have lives feel the same way. My 2 cents: CentOS: A clone of Redhat Enterprise Linux. It is quite stable but does not have quite the same selection of packages as Ubuntu and its derivatives like Mint. Also, the software tends to be lag a bit behind faster churning distros like Ubuntu. But if you don't care about living on the bleeding edge, CentOS is for you. Mint+Mate: An Ubuntu derivitave that runs the Mate UI, which is a fork of Gnome2. I'm using it now on my home PC. It's fast enough for me and I have it set up so that it looks very similar to the way I had 10.04. So far I have had zero problems with it. In short, if you want to be on the bleeding edge and don't mind a few bugs, get Mint+Mate. Otherwise, get CentOS. Comment: The case for lower resolution (Score 1) 375 by doodleboy (#42906217) Attached to: Ask Slashdot: What Is Your Favorite Monitor For Programming? Folks get all exited about having the highest possible resolution, but that is only part of the story. I have 2 x Samsung p2770fh 27" 1920x1080 monitors. They're discontinued now, but 2 years ago I paid $280 each at the local Costco. (I would suggest buying monitors locally so they can be returned if you get dead pixels.) Anyway, about that resolution. I'm 48 years old and my eyeballs don't work as well as they used to. I have a smokin' work-issued laptop, a Lenovo w520. I love that I can run multiple VMs at once on the thing, but I find myself squinting at it because of the higher pixel density. But at home on the 27's everything is nice and big and easy to read, even if I'm leaned back in my chair. Otherwise the screens are nice and bright and text is very easy to read. Video looks great. For less than $600 I am a happy camper. Comment: Re:Atlas Shrugged (Score 1) 700 by doodleboy (#41644347) Attached to: Ask Slashdot: What Books Have Had a Significant Impact On Your Life? I don't say this to be a smart ass, so please don't take it that way, but perhaps it was that simple to you because you read it when you were 16? Mind you, my youngest child is older than that and I spent half of my life overseas in the Army, so I am neither young nor naive. Give it another shot. You may be surprised. I did read it again about 10 years ago, 20 years after the first go-round and after picking up a BA in philosophy and literature. It was a remarkably different experience from being a 16 year-old fanboy. The book is not very well constructed and Galt's speech, nearly a book in itself, was nearly impossible to get through. If I was going to recommend any Rand book it would be The Fountainhead, because it gets the basic message across without all the interminable editorializing. Comment: Re:Atlas Shrugged (Score 3, Interesting) 700 by doodleboy (#41639301) Attached to: Ask Slashdot: What Books Have Had a Significant Impact On Your Life? Most of the people who criticize Atlas Shrugged haven't read it, even if they say they have. 
It's a great book. I second the recommendation! I read Atlas Shrugged and to my knowledge all of Ayn Rand's other published works. In fact I thought she was the shiznit when I was 16. It all seemed so simple: these people over here are good, and those other people over there are evil. However, I have come to understand real life is a good deal more complex than that, and the binary distinctions favoured by ideologues like Rand in no way correspond with reality. I have come to believe that any philosophy based on hate is fundamentally untenable. Comment: rsync scripty goodness (Score 1) 304 by doodleboy (#39536693) Attached to: Ask Slashdot: It's World Backup Day; How Do You Back Up? I haven't bothered with offsite backups. I don't need to because I live in Florida and it's not like we ever get hurricanes or anything like that. I have a 3ware raid card in my 10.04 box with 4 drives in raid 5, as well as an eSATA drive. I export a TB of the RAID array and a TB from the iSCSI drive via iSCSI to two 2k8 servers running in Virtualbox VMs. In the Windows VMs, DFS mirrors the data to the two mountpoints. I export those shares to a Z: drive which maps on login. I set up the free MicrosoftSyncToys powertool to mirror the local My Documents directories to the Z: drive. When SyncToy is run, and the data is backed up in two places. I have another esata drive which mirrors my home partition every night. This is slightly complicated because I have a couple dozen virtual machines that could be running (it's usually less than 10), so what I wanted was a way to pause any VMs that might be running, back everything up, then unpause. Here's the script I wrote to do that. # nightly_backup: Script to pause any virtual machines that are running, # do an rsync backup, then unpause the virtual machines. Set the SRCE # and DEST variables below, as well as the USER variable. Script assumes # that $DEST is a separate partition. If this is not the case for you, # comment out the line _mount_check below. # Sample cron entry: # 30 04 * * * /usr/local/bin/nightly_backup &>>/var/log/nightly_backup.log # Sample /etc/logrotate.d/nightly_backup file # /var/log/nightly_backup.log { # monthly # missingok # rotate 4 # compress # } # --exclude-from file syntax: # Copy directory but not its contents: # + Cache/ # - **/Cache/** # Do not copy (file or directory) # - .gvfs # $Id: nightly_backup,v 1.1 2011/12/03 19:23:15 doodleboy Exp kevin $ ARGS="-aHS --delete --stats --exclude-from=/usr/local/bin/rsync_exclude" # Function to pause or resume running virtual machines _pause-resume() {         VMS=$(su - $USER -c "vboxmanage --nologo list runningvms")         if [ -n "$VMS" ]; then                 printf "$VMS\n" | while read VM; do                         VM=${VM%% \{*}                         printf "Running $ARG on $VM...\n"                         su - $USER -c "vboxmanage --nologo controlvm $VM $ARG"                 printf "No VMs are running.\n" # Abort backup if $DEST partition is not mounted _mount_check() {         if mount | grep -w "$DEST" &>/dev/null; then                 printf "$DEST is mounted. Proceeding with backup.\n"                 printf "$DEST is not mounted. 
Aborting backup.\n"                 printf "*** $(date): Aborting nightly backup ***\n\n"                 exit 1 # Start banner printf "*** $(date): Starting nightly backup ***\n" # Make sure $DEST is mounted # Comment out _mount_check if $DEST is not a partition # Pause virtual machines _pause-resume pause # Flush pending writes sleep 3 # Do the backup # Resume virtual machines _pause-resume resume # Exit banner printf "*** $(date): Finished nightly backup ***\n\n" I wrote another script to email me the status of my raid array every night. Admittedly this is only useful if you have a 4-drive 3ware card, but it could be adapted to other hardware. Here it is: RAID=$(tw_cli /c4 show) U0=$(echo "$RAID" | awk '/^u0/ {print $3}') P0=$(echo "$RAID" | awk '/^p0/ {print $2}') P1=$(echo "$RAID" | awk '/^p1/ {print $2}') P2=$(echo "$RAID" | awk '/^p2/ {print $2}') P3=$(echo "$RAID" | awk '/^p3/ {print $2}') BB=$(echo "$RAID" | awk '/^bb/ {print $4}') for status in "$U0" "$P0" "$P1" "$P2" "$P3" "$BB"; do     if [ "$status" = "OK" ]; then         SUBJECT="RAID Status OK"     elif [ "$status" = "VERIFYING" ]; then         SUBJECT="ISSUES with RAID Array!!!" catEOF | mailx -s "$SUBJECT" someone@somesite.com & The fortune for today is: Comment: Re:ltsp with fat clients (Score 1) 202 Well, you can PXE boot LTSP over wifi if you have a wireless bridge. It's not exactly reliable though, at least it wasn't when I tried it last year. Where I work we have 300 remote locations running LTSP on lucid. One server at each location, perhaps as many as a dozen thin clients using PXE boot. We built our own update mechanism, where the LTSP servers rsync a directory tree that contains the updates. Anything new, they run the update. If an update fails for whatever reason they send an email back to hq. It's been working fairly well for us. LTSP enabled us to put a modern Linux desktop with Firefox, OO.org, etc, on the desktop of every underpowered thin clients that we own. This saved us from having to obsolete a big chunk of our infrastructure, probably a couple million in new hardware and depreciation costs. We used a Clonezilla cluster to build the disk images. We wrote a config script that configured the base images (hostname, network, etc) for each location. It was a big effort but it went well. Comment: Not Really Possible to go Paperless (Score 1) 311 by doodleboy (#39008863) Attached to: Ask Slashdot: How To Go Paperless At Home? If it's more work to save a doc in a paperless format, or if it costs more, then it isn't practical and doesn't make a lot of sense. Also, if you are all digital and a little lazy about backups, you're only a disk crash away from disaster. I like having paper copies of important stuff. I do print most everything double-sided. This alone will save a huge amount of paper. Duplex printers aren't nearly as expensive as they used to be. I have a samsung clp-620nd, a networked color duplex laser printer. It's fantastic for the money (about $300), but I'm sure there are others out there that would work just as well. If I do need to scan, I have a cheap HP j4550 multifunction inkjet. I never bothered buying new ink for it, but I do use the scanner. Normally I'll import into SimpleScan and output to PDF. SimpleScan works surprisingly well. I also print to PDF for receipts and the like if I want to keep a digital copy. If it's important I'll also print a copy and put it in the file cabinet. My thought on scanning vs printing is, if it's important then do both. 
Don't keep anything that matters in just in one place. Comment: Been there (Score 1) 315 by doodleboy (#38405828) Attached to: Ask Slashdot: Good Metrics For a Small IT Team? We had a new IT director show up a few years ago that came around to talk to everyone about their hopes and dreams and all the rest of it. Because he cared about us as people. Shortly after that the IT department shrunk by a third. It's Friday. I took the night off. I will be VPNing in tomorrow to do a bunch of stuff. I have to go in on Sunday to do a bunch of other stuff I can't do remotely. Fuck this shit. Comment: Re:I know this isn't what you asked but... (Score 2) 320 by doodleboy (#37904030) Attached to: Which OSS Clustered Filesystem Should I Use? I also have a 3ware card and four 1 TB drives in RAID 5 in my 10.04 desktop PC at home. Some of that space is exported via iSCSI to a couple of Windows boxes. Then I back the RAID array up with a couple of external SATA drives. My wife thinks this is excessive, but I lost a lot of data, once, nothing critical but stuff I cared about, emails and papers from college, pics of friends and family, etc. But when the drive started throwing SMART errors I thought, yup, better go pick up a new drive soon... 3 days later, it was dead. The irony is that one of my main responsibilities at work is backups, mostly with shell scripts I wrote myself. Many of you probably have most of your important stuff on one drive that you don't back up. At the very least, pick up an external USB drive and schedule backups for anything you care about. Comment: Re:Tape can be unreliable (Score 1) 611 by doodleboy (#28749987) Attached to: Best Home Backup Strategy Now? Since when is tape unreliable? It sure can be, especially the lower end stuff like Travan. Where I work we have over 300 remote sites, which used to have TR-5 tapes and drives that failed continuously. We replaced all of them with a local rsync to a different partition with snapshots going back a week, along with a remote rsync to a bank of servers with snapshots going back a month. We had to shell out some cash for the backup servers and some dev time for the scripts, but the savings from not buying tapes paid for them fairly quickly. The local rsyncs take the place of tapes, while the remotes provide secure off-site storage. We have been able to rebuild branch office servers using data off the backup servers with no data loss and minimal downtime. Hard drives are cheap, fast and reliable. I honestly don't understand the appeal of tapes. Comment: Re:The right tools for the job (Score 1) 421 by doodleboy (#28466867) Attached to: How Do You Sync &amp; Manage Your Home Directories? At work we're starting to install Ubuntu 9.04 to dualboot with XP on upper management's laptops. Ubuntu is pretty slick these days, but there is the problem of syncing files across both operating systems. We've been kicking around the idea of using a fat32 partition to keep files on, but that sucks on many levels. Reading your post, it occurs to me that unison will do exactly what we need. I knew I came here for a reason. Comment: Re:Moving parts are the main problem (Score 5, Informative) 655 by doodleboy (#27474127) Attached to: How Do I Provide a Workstation To Last 15 Years? My full solution would be a fanless rig, with RAID 1 for full redundancy of disks so if a hard disk fails, it doesn't take your data with it, and weekly backups to DAT tape stored off-site. 
Then I'd use a pair of power supplies, using a diode to prevent power from one from getting into the other, and a zener diode or 78 series linear regulators to ensure a failing supply can't overpower any one line. Then, from my little power circuit, the two power supplies would feed the one motherboard, which would be underclocked at reduced voltage. It would have the highest possible amount of RAM in it, because that would reduce the writes to the hard drives. On the software side, I would consider hosting the DOS app on linux using an emulator such as dosemu or dosbox. The OP's dad would have an environment very similar to what he's using now. I would probably use Debian stable for both boxes, which has very long release cycles and is very stable. With linux comes the option to replace the DAT tapes with an off-site rsync over ssh. If the main box dies, you'd be able to just swap in the backup box in a couple of minutes. If the data set isn't very large the mirror will complete in a couple of seconds. It's very easy to do: Create a RSA public/private key pair: ssh-keygen -t rsa, press enter at the password prompts. Copy the public key to the remote box: ssh-copy-id -i ~/.ssh/id_rsa.pub remotebox. Have a nightly cron job to push the files: rsync -ave ssh --delete /localfiles/ remotebox:/localfiles. For bonux points you could even throw in snapshots. I'm backing up hundreds of partitions this way at work, each with snapshots going back a month. Tapes are slow, unreliable and expensive. I would not use them for any purpose.
8
3
Personal tools From HaskellWiki < Performance Revision as of 15:22, 10 January 2006 by Simonmar (Talk | contribs) Jump to: navigation, search Please report any overly-slow GHC-compiled programs. Since GHC doesn't have any credible competition in the performance department these days it's hard to say what overly-slow means, so just use your judgement! Of course, if a GHC compiled program runs slower than the same program compiled by another compiler, then it's definitely a bug. 1 Use Optimisation Optimise, using -O or -O2: this is the most basic way to make your program go faster. Compilation time will be slower, especially with -O2. At present, -O2 is nearly indistinguishable from -O. GHCi cannot optimise interpreted code, so when using GHCi, compile critical modules using -O or -O2, then load them into GHCi. 2 Measuring Performance The first thing to do is measure the performance of your program, and find out whether all the time is being spent in the garbage collector or not. Run your program with the +RTS -sstderr option: $ ./clausify 20 +RTS -sstderr 42,764,972 bytes allocated in the heap 6,915,348 bytes copied during GC (scavenged) 360,448 bytes copied during GC (not scavenged) 36,616 bytes maximum residency (7 sample(s)) 81 collections in generation 0 ( 0.07s) 7 collections in generation 1 ( 0.00s) 2 Mb total memory in use INIT time 0.00s ( 0.00s elapsed) MUT time 0.65s ( 0.94s elapsed) GC time 0.07s ( 0.06s elapsed) EXIT time 0.00s ( 0.00s elapsed) Total time 0.72s ( 1.00s elapsed)  %GC time 9.7% (6.0% elapsed) Alloc rate 65,792,264 bytes per MUT second Productivity 90.3% of total user, 65.1% of total elapsed This tells you how much time is being spent running the program itself (MUT time), and how much time spent in the garbage collector (GC time). If your program is doing a lot of GC, then your first priority should be to check for Space Leaks using heap profiling, and then to try to reduce allocations by time and allocation profiling. If you can't reduce the GC cost any further, then using more memory by tweaking the GC options will probably help. For example, increasing the default heap size with +RTS -H128m will reduce the number of GCs. If your program isn't doing too much GC, then you should proceed to time and allocation profiling to see where the big hitters are. 3 Unboxed types When you are really desperate for speed, and you want to get right down to the “raw bits.” Please see GHC Primitives for some information about using unboxed types. This should be a last resort, however, since unboxed types and primitives are non-portable. Fortunately, it is usually not necessary to resort to using explicit unboxed types and primitives, because GHC's optimiser can do the work for you by inlining operations it knows about, and unboxing strict function arguments (see Performance:Strictness). Strict and unpacked constructor fields can also help a lot (see Performance:Data Types). Sometimes GHC needs a little help to generate the right code, so you might have to look at the Core output to see whether your tweaks are actually resulting in the desired results. One thing that can be said for using unboxed types and primitives is that you know you're writing efficient code, rather than relying on GHC's optimiser to do the right thing, and being at the mercy of changes in GHC's optimiser down the line. This may well be important to you, in which case go for it.
6
3
Take the 2-minute tour × I recently got into trouble because of this. $sudo vim /etc/motd [sudo] password for bruce: bruce is not in the sudoers file. This incident will be reported. Is there a way to check if I have sudo access or not? share|improve this question Ask your systems administrator? –  mdpc Feb 18 '13 at 19:40 @mdpc: Is there another way besides that? –  Bruce Feb 18 '13 at 19:45 You have not mentioned if you can attain root access or not. –  mdpc Feb 18 '13 at 19:46 This has to be the first instance of seeing someone following up on "This incident will be reported". –  slhck Feb 18 '13 at 19:55 add comment 2 Answers up vote 9 down vote accepted Run sudo -v. It is usually used to extend your sudo password timeout, but can be used for determining whether you have any sudo privileges. $ sudo -v Sorry, user [username] may not run sudo on [hostname]. Man page excerpt: If given the -v (validate) option, sudo will update the user’s time stamp, prompting for the user’s password if necessary. This extends the sudo timeout for another 5 minutes (or whatever the timeout is set to in sudoers) but does not run a command. If your user is only allowed to run specific commands, this command will work, indicating you are allowed to run something with different privileges. While the message looks different when trying to execute a command you're not allowed to in this case (and no mail is sent to root), it's still possible you'll get into trouble if the admins read /var/log/secure. $ sudo ls [sudo] password for [username]: Sorry, user [username] is not allowed to execute '/bin/ls' as root on [hostname]. To find out what you're allowed to run with different privileges, you can use sudo -l. Note that this command requires you to enter your password. share|improve this answer Thanks. sudo -v works for me. The man page says I can run sudo -l as well but that asks for a password. Why is that? –  Bruce Feb 18 '13 at 20:00 @Bruce I'm guessing here, but otherwise someone (or a program you run) could find out what programs can be executed (possibly without entering password) by your current user and try to use that information maliciously. –  Daniel Beck Feb 18 '13 at 20:05 add comment Follow these steps to view the sudoers file. If you're in there, you have sudo. If not, you can add yourself. 1. su 2. visudo 3. Bottom of the file, enter your_username_here ALL=(ALL) ALL 4. Hit ESC and type :wq 5. Type exit 6. Re-run your command that needed sudo 7. Enter your password (not the root's password) share|improve this answer add comment Your Answer
7
7
Tell me more × When I connect to my Ubuntu 10.04.2 LTS server I get the following banner: 25 packages can be updated. 15 updates are security updates. However if I run package upgrades/updates nothing comes up: $ sudo apt-get update $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done After doing some research I found that issuing apt-get dist-upgrade could help but it makes no difference. Any help is welcome. share|improve this question add comment 1 Answer up vote 5 down vote accepted share|improve this answer add comment Your Answer
6
7
Tell me more × Why does Python see these classes as different data types? >>> class A: ... pass >>> class B(object): ... pass >>> a = A() >>> b = B() >>> type(A) <type 'classobj'> >>> type(B) <type 'type'> >>> type(a) <type 'instance'> >>> type(b) <class '__main__.B'> I'm pretty new. So I don't really understand why it sees all of this as different data types. They are both classes so it seems as though they should be the same. share|improve this question add comment 1 Answer up vote 6 down vote accepted You're using Python 2. Python 2 allows classes that don't inherit from object, which was added in version 2.2. They behave differently from "new-style classes" in a few ways, and you've found a couple. There's no reason for the different behaviour other than to retain backward-compatibility, that is to ensure that code written for old-style classes continues to work in new releases of Python 2. Python 3 is not backward-compatible and does not have old-style classes. If you wrote the same code in Python 3, then A would inherit from object even though you don't say so explicitly. share|improve this answer add comment Your Answer
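The last point of the answer is easy to check: run the question's two class definitions under Python 3 and both come out as ordinary new-style classes. A sketch:

    # Under Python 3 there are no old-style classes, so A and B are the same kind
    # of thing; the 'classobj' / 'instance' types from the question exist only
    # on Python 2.
    class A:
        pass

    class B(object):
        pass

    print(type(A), type(B))          # <class 'type'> <class 'type'>
    print(A.__bases__, B.__bases__)  # (<class 'object'>,) (<class 'object'>,)
    print(type(A()), type(B()))      # <class '__main__.A'> <class '__main__.B'>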
10
6
Take the 2-minute tour × These days I design some algorithms in python, but find first two greatest value in python is too ugly and inefficient. How to implement it in a efficient or a pythonic way? share|improve this question possible duplicate of Filter max 20 values from a list of integers –  Steven Rumbalski Jun 11 '12 at 10:46 add comment 3 Answers up vote 5 down vote accepted I've found this to be consistently faster (about 2x for a list of 1,000,000 items) than heapq.nlargest: def two_largest(sequence): first = second = 0 for item in sequence: if item > second: if item > first: first, second = item, first second = item return first, second (function modified at the suggestion of MatthieuW) Here are the results of my testing (timeit was taking forever, so I used time.time()): >>> from random import shuffle >>> from time import time >>> seq = range(1000000) >>> shuffle(seq) >>> def time_it(func, *args, **kwargs): ... t0 = time() ... func(*args, **kwargs) ... return time() - t0 >>> #here I define the above function, two_largest(). >>> from heapq import nlargest >>> time_it(nlargest, 2, seq) >>> time_it(two_largest, seq) share|improve this answer You should compare with second, then first. In a 1000000 items list (unless it is sorted), most will be less than current "second", so you can avoid one comparison per item. –  MatthieuW Jun 11 '12 at 9:19 @MatthieuW: Good point! I was actually surprised that an interpreted script worked faster than any of the builtins. –  Joel Cornett Jun 11 '12 at 9:23 At least on Python 2.7, the heapq module is also implemented as an interpreted Python script, not as C code. So your result isn't that surprising. –  interjay Jun 11 '12 at 9:31 @interjay: Ah, makes sense. –  Joel Cornett Jun 11 '12 at 9:37 one can use first = second = None to include negative numbers in a clean way, doesn't work for python3 though.. –  gokcehan Dec 17 '12 at 21:50 add comment Most Pythonic way is to use nlargest: import heapq values = heapq.nlargest(2, my_list) share|improve this answer Or just use the builtin sorted. values = sorted(my_list, reverse=True)[:2] –  Christian Witts Jun 11 '12 at 7:41 @Christian: That would be both slower and (in my opinion) less Pythonic. –  interjay Jun 11 '12 at 7:41 @interjay: for small lists sort() might be faster. –  J.F. Sebastian Jun 11 '12 at 7:51 @RichardWong: On my computer I get similar speeds for 100 elements. –  interjay Jun 11 '12 at 8:12 @richardWong: use nlargest() both for readability and a better asymptotic complexity. if profiler says that for your data nlargest is a bottleneck then you could try sort() to see how it compares. –  J.F. Sebastian Jun 11 '12 at 8:14 show 1 more comment mylist = [100 , 2000 , 1 , 5] biggest = mylist[-2:] share|improve this answer -1 for suggesting sorting. This is just plain horrible. No need to sort in order to find the largest two elements. –  Michael Wild Jun 11 '12 at 7:44 @MichaelWild, its true that sorting ain't needed for n largest nos. But even nlargest says Equivalent to: sorted(iterable, key=key, reverse=True)[:n] –  tuxuday Jun 11 '12 at 7:52 @tuxuday - it is equivalent in the result, not in the performance. It uses sorted only when n>size. –  eumiro Jun 11 '12 at 8:10 add comment Your Answer
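The accepted answer's function lost its indentation (and, by the look of it, its else branch) when the page was flattened to text. Re-indented, and seeded with float('-inf') rather than 0 so that all-negative sequences also work (the concern raised in the last comment, which suggested None), one working version looks like this:

    # Reconstructed single-pass "top two" scan; float('-inf') instead of 0 means
    # sequences of negative numbers are handled too.
    def two_largest(sequence):
        first = second = float('-inf')
        for item in sequence:
            if item > second:
                if item > first:
                    first, second = item, first
                else:
                    second = item
        return first, second

    print(two_largest([100, 2000, 1, 5]))    # (2000, 100)
    print(two_largest([-7, -3, -12]))        # (-3, -7)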
15
6
Take the 2-minute tour × Is there a way to find what path a command has had it's output redirected to (if it has been)? I tried using: ps -p PID -o cmd Thinking I could look for a > and extract the path from that, but the output doesn't have that part in it. I'm pretty sure it hasn't just been truncated. share|improve this question add comment 4 Answers up vote 2 down vote accepted You can use the proc file system /proc/self/fd for this readlink /proc/self/fd/1 for stdout or 2 for stderr. share|improve this answer add comment If you know the PID, just inspect /proc/ID/fd/1. It should be linked to the actual path: $ watch date > /tmp/1 & [1] 27346 $ ls -l /proc/27346/fd/1 l-wx------ 1 choroba users 64 2013-02-15 16:28 /proc/27346/fd/1 -> /tmp/1 share|improve this answer add comment Use the lsof (list open files) command to see what files a process has open for writing. For example: $ lsof -p 31714 bash 31714 dogbane 0u CHR 136,4 6 /dev/pts/4 bash 31714 dogbane 1w REG 8,1 15 2032202 /tmp/t The w in the FD (file descriptor) column means that /tmp/t is open for writing. share|improve this answer add comment How about it? [root@us04 ~]# ls -l /proc/14170/exe lrwxrwxrwx 1 root root 0 Feb 15 10:36 /proc/14170/exe -> /usr/sbin/httpd One more example: [root@us04 ~]# readlink -f /proc/5352/exe share|improve this answer add comment Your Answer
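The /proc trick is just as easy from Python, which can be handy inside a monitoring script; a Linux-only sketch, where the PID is whatever process you are inspecting:

    # /proc/<pid>/fd/1 is a symlink to wherever that process's stdout goes:
    # a regular file if redirected, a pty or a pipe otherwise.  Needs permission
    # to inspect the target process.
    import os

    def stdout_target(pid):
        return os.readlink("/proc/{}/fd/1".format(pid))

    print(stdout_target(os.getpid()))   # e.g. /dev/pts/4, or the redirect target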
8
3
Take the 2-minute tour × I am building a wireless link between two points using Wireless-N standard. To determine the best position of the antennas, there is a signal strength indicator on the antennas. The problem is the link strength doesn't always correspond to link throughput. I'm looking for an offline version of http://www.speedtest.net or http://www.speakeasy.net/speedtest/ that can do continous monitoring of throughput. Ideally, I would run this software and monitor increase/decrease of throughput while repositioning the antenna. share|improve this question 4 Answers 4 up vote 3 down vote accepted Sounds like a job for iperf. Or a flood ping. share|improve this answer Flood ping is interesting, easy enough to do without any additional software. Thanks. –  Adrian Godong Dec 5 '10 at 4:41 If that works, I'd appreciate the check mark. I'm not too proud to ask. –  Jed Daniels Dec 5 '10 at 6:04 i gave you the check mark myself - as its a good answer. Biggest thing in wireless is he may want to check more than just tcp - so making sure he does some UDP testing would be wise... those tests will really show you where things are good vs bad. –  Glenn Kelley Dec 5 '10 at 6:33 iPerf would do much better than would a file on a webserver. Few reasons - including mod_throttle, bandwidth etc - coming from the server itself. Jed hit the nail on its head suggesting IPERF - here is why: Iperf is a tool to measure the bandwidth and the quality of a network link. If you would combine IPERF w/ Jperf you have a great graphical interface to use while playing with your link. A single file coming from a webserver would not be able to give you a clear picture since there is no way to test the quality of the link. The quality of the link is important not just speed. Using IPERF - you can test the Latency (response time or RTT) - by uinsg the Ping command. You can also look at any Datagram loss using an IPERF UPD test - heck... Jitter (the variation of the latency across the link) can be measured as well with an IPerf UDP test. The quality of a link can be tested as follows: - Latency (response time or RTT): can be measured with the Ping command. - Jitter (latency variation): can be measured with an Iperf UDP test. - Datagram loss: can be measured with an Iperf UDP test. Iperf will also allow you to test bi-directionally which a webserver will not. Let us know if you need help using IPERF - just respond w/ the operating system your using on each system on both sides of the link. One last note - what gear are you using - just the radios or spectrum analyzation tools? Your WIFI gear may be able to help you a great deal. I am assuming you are using 2.4GHZ yes? Reason I ask is that if you are using some of the higher frequencies (such as 5.xGHZ - which MAC's support) they do a good job at closer distances - give excellent speeds BUT hate things like walls in their way. If you have a spectrum analyzer (ps most all of Ubiquity gear www.UBNT.com have this built in) will help you a great deal. For example - you may find that channel 1 vs channel 8 is a better choice. We have some links using 2.4Ghz going over a great distance (15+ miles at some places) - and the only variation on speed is the channel - thus allowing us to break apart a great deal of interference. Speed Example 1 versus where the CCQ (link quality) is lower overall but we see more bandwidth and a better db level. 
Speed Example different channel In a small office - warehouse etc - channels will make a huge difference, especially if others are close this can really make a difference. The point here is simple - Location of the antenna's is not the only factor. If you would agree that using good equipment can make the world of difference than it is fair to say Using a spectrum analyzer will make a Universe of difference. Waterfall Chart: Ubiquity Waterfall This time-based graph shows the aggregate energy collected since the start of an AirView / spectrum analyzer session, over time for each frequency. The power of the energy in dBm is shown across the frequency span and one row is inserted in this graph every few seconds. It is a great thing to let this run for a long time to see what is happening on the connection from a radio perspective. Channel Usage Chart: Channel Usage Chart Wafeform Chart: Waveform Chart Example Real Time Chart: This is most likely what you want the most... Real Time Radio Chart This tool runs for under $100 online - just do a froogle search for Airview. Combined with IPERF - you can have a really great network for not much work or hassle. If this has helped - please vote this answer up a notch. share|improve this answer Put a really big data file on a local webserver, then use wget to download the file. By default, wget displays a pretty good speed indicator while it's downloading, e.g.: $ wget -O - http://xxx.xxx.xxx.xxx/really-big-file.bin >/dev/null --2010-12-04 20:31:32-- http://xxx.xxx.xxx.xxx/really-big-file.bin Connecting to xxx.xxx.xxx.xxx:80... connected. HTTP request sent, awaiting response... 200 OK Saving to: `STDOUT' 11% [===> ] 12,342,529 1.01M/s eta 89s share|improve this answer I was thinking something like this, but assuming it's 100Mbps link then 1Gb file will be done in 10 seconds. There's also disk read time which shouldn't be taken into account. –  Adrian Godong Dec 5 '10 at 4:38 @Adrian: So try a 10GB file. Or call wget over and over in a loop. Or hack together an ad hoc server with a few lines of Python to stream pseudo-random data in response to an HTTP GET request. Use your imagination! I don't understand your "disk read time" objection, though. –  Steven Monday Dec 5 '10 at 5:02 Another option is ttcp. Not as fancy as iperf, but sometimes it's all you need. share|improve this answer Your Answer
8
3
Take the 2-minute tour × Reasons for doing this aside, is there a reasonable way to convert an entire git repository to subversion? I can find only tons on information on migrating from subversion to git, and exchanging changesets between the two, but not for doing a simple conversion of the entire git repository to svn. share|improve this question Duplicated here: stackoverflow.com/questions/661018/… –  Casey Aug 10 '09 at 22:32 Sad that you had to preface with "Reasons for doing this aside" in order to prevent a flame war or such... –  SchighSchagh Nov 9 '12 at 20:15 2 Answers 2 up vote 10 down vote accepted The general problem with doing conversions this direction is that Git repositories can contain more than just a linear history of revisions, as Subversion would expect. Multiple ancestries with divergent histories and frequent merge commits are all possible, which can't be easily represented in a Subversion repository. For simple cases where you do have a linear history in your Git repository, you can use git-svn dcommit to push the lot up to an otherwise empty Subversion repository. share|improve this answer Sorry to stir this up again after a few years, but can you give a concrete example of how to do this? Say I have a clone of a git repository at ~/my-git-repo, and I want to copy the commit history into some SVN repo, say svn://foo.com/empty-svn-repo/ –  SchighSchagh Nov 9 '12 at 21:13 @SchighSchagh: Have a look at this recent question, it might be more suited to what you need: possible to recreate svn repository from (full) git-svn clone? –  Greg Hewgill Nov 10 '12 at 2:44 It's very easy to perform with SubGit. $ svnadmin create svn.repo $ subgit configure svn.repo $ nano svn.repo/conf/subgit.conf to specify a path to your bare repository (you may use "git clone --bare <URL>" if you have none locally) $ subgit install svn.repo After conversion your SVN and linked Git repository will be in sync: every Git push will be translated to SVN commit and vice versa. To break translation run $ subgit uninstall svn.repo While translation SubGit will try to preserve commit dates, tags, ignores, merges, EOLs, branches and so on, as it is possible. I can't say the same about git-svn repository. share|improve this answer Your Answer
12
3
Take the 2-minute tour × Shouldn't the list comprehension restrict the variable scope. user = <user1> project.users = [<user1>, <user2>, <user3>, <user4>] project_usernames = [user.username for user in project.users] I am generating the list project_usernames using list comprehension on project.users. But it is modifying the user to which was earlier . I am using above flow in one of my project but because of this bug it was not working. later when I changed the variable "user" in list comprehension, it worked properly. entity within <> refers to I know that the interpreter works line by line, but shouldn't the scope of variable used in list comprehension die once the iteration is over?. share|improve this question 1 Answer 1 up vote 1 down vote accepted This is a Python 2.x 'feature', where the variable you use inside of the list comprehension (in your case, user) becomes part of the surrounding scope (in Python 3, it is treated like a generator - see here for the breakdown from Guido himself). Assuming that you are iterating through your list (as opposed to needing to have everything available in memory), you could set it up like a generator by just changing the brackets to parentheses: >>> user = 'test' >>> l = ['user1', 'user2', 'user3'] >>> users = (user[4] for user in l) >>> users <generator object <genexpr> at 0x7f6a89507140> >>> user >>> for num in users: ... print num share|improve this answer Your Answer
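A short demonstration of the behaviour and of the two work-arounds mentioned in the answer (rename the loop variable, or use a generator expression); the output comments describe Python 3, with the Python 2 difference noted inline, and the names are only loosely modelled on the question.

    # On Python 2 the comprehension's loop variable leaks into the surrounding
    # scope; on Python 3, and in generator expressions on both versions, it
    # stays local to the expression.
    user = "current-user"
    users = ["alice", "bob", "carol"]

    usernames = [u.upper() for u in users]   # different name, so `user` is safe
    print(user)                              # "current-user"

    usernames = list(name.upper() for name in users)   # genexp: never leaks
    print(user)                              # unchanged even on Python 2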
11
6
Take the 2-minute tour × sqlstring = 'INSERT INTO {}' table = 'Product' does not result to 'INSERT INTO Product' but still 'INSERT INTO {}' why is this so? share|improve this question Using my Python 3.2 it does result in "INSERT INTO Product", not sure what the problem is on your side. –  Michael Dec 1 '11 at 0:09 2 Answers 2 up vote 5 down vote accepted Python 3.2 does this correctly: $ python3.2 Python 3.2.2 (default, Sep 5 2011, 22:09:30) [GCC 4.6.1] on linux2 >>> sqlstring = 'INSERT INTO {}' >>> table = 'Product' >>> sqlstring.format(table) What version are you using? Additional thought: Strings are immutable and do not self-modify in-place. Maybe you want: sqlstring = sqlstring.format(table) If format strings self-modified themselves in place it would be quite annoying for Python programmers, as each format string could only be used once. Sometimes we want the chance to build a nice format string then use it hundreds of times — which is easy if format() returns the result instead of modifying the format in-place. share|improve this answer python 3.2 r32:88445 Feb 11, 2011 I am using Windows 7 64x –  Lemuel Adane Dec 1 '11 at 0:13 try putting it in a method then run it. –  Lemuel Adane Dec 1 '11 at 0:14 What are you doing with the value returned by format()? Are you using it immediately, or does your method save the value somewhere for you to find later? –  Brandon Rhodes Dec 1 '11 at 0:16 format doesn't modify the string (it can't because strings are immutable). It returns a new string. You need to assign the result of the call to format: result = sqlstring.format(table) share|improve this answer Your Answer
11
9
Loïc Dachary http://dachary.org Free Software Developer Journey Mon, 25 Nov 2013 15:08:02 +0000 en hourly 1 http://wordpress.org/?v=3.0.4 Manage a multi-datacenter crush map with the command line http://dachary.org/?p=2536 http://dachary.org/?p=2536#comments Thu, 21 Nov 2013 15:28:56 +0000 Loic Dachary http://dachary.org/?p=2536 Continue reading ]]> A new datacenter is added to the crush map of a Ceph cluster: # ceph osd crush add-bucket fsf datacenter added bucket fsf type datacenter to crush map # ceph osd crush move fsf root=default moved item id -13 name 'fsf' to location {root=default} in crush map # ceph osd tree # id weight type name up/down reweight -13 0 datacenter fsf -5 7.28 datacenter ovh -2 1.82 host bm0014 0 1.82 osd.0 up 1 The datacenter bucket type already exists by default in the default crush map that is provided when the cluster is created. The fsf bucket is moved ( with crush move ) to the root of the crush map. A new rule is created to take objects in the fsf datacenter and ensure no host holds more than one copy: # ceph osd crush rule create-simple fsf-rule fsf host # ceph osd crush rule dump { "rule_id": 6, "rule_name": "fsf", "ruleset": 6, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -13}, { "op": "chooseleaf_firstn", "num": 0, "type": "host"}, { "op": "emit"}]}] A new pool is created and associated with the newly created rule ( id 6 ): # ceph osd pool create fsf 128 pool 'fsf' created # ceph osd pool set fsf crush_ruleset 6 set pool 7 crush_ruleset to 6 The OSDs are automatically added to the fsf bucket by adding the following to /etc/ceph/ceph.conf: osd_crush_update_on_start = 1 osd_crush_location = datacenter=fsf It is interpreted by the ceph-osd upstart script that is triggered when a new OSD is created or when the machine boots. # ceph-deploy osd create bm0101.the.re:/dev/sdb:/dev/sdc # ceph osd tree -13 3.64 datacenter fsf -14 3.64 host bm0101 8 3.64 osd.8 up 1 http://dachary.org/?feed=rss2&p=2536 2 Transparently route a public subnet through shorewall http://dachary.org/?p=2525 http://dachary.org/?p=2525#comments Wed, 20 Nov 2013 14:07:21 +0000 Loic Dachary http://dachary.org/?p=2525 Continue reading ]]> The is routed to a firewall running shorewall. Behind the firewall is an OpenStack cluster running a neutron l3 agent and known to the firewall as A parallel zone is defined as follows: diff -r 34984beb770d hosts +++ b/hosts Wed Nov 20 14:59:09 2013 +0100 @@ -0,0 +1,1 @@ +opens eth0: diff -r 34984beb770d policy --- a/policy Wed Jun 05 00:19:12 2013 +0200 +++ b/policy Wed Nov 20 14:59:09 2013 +0100 @@ -113,6 +113,7 @@ # If you want to force clients to access the Internet via a proxy server # on your firewall, change the loc to net policy to REJECT info. loc net ACCEPT +loc opens ACCEPT loc $FW ACCEPT loc all REJECT info @@ -124,6 +125,7 @@ # This may be useful if you run a proxy server on the firewall. 
#$FW net REJECT info $FW net ACCEPT +$FW opens ACCEPT $FW loc ACCEPT $FW all REJECT info @@ -132,6 +134,7 @@ net $FW DROP info net loc DROP info +net opens ACCEPT net all DROP info diff -r 34984beb770d zones --- a/zones Wed Jun 05 00:19:12 2013 +0200 +++ b/zones Wed Nov 20 14:59:09 2013 +0100 @@ -115,5 +115,6 @@ fw firewall net ipv4 loc ipv4 +opens ipv4 and net incoming packets are accepted for the subnet when targeting the loc zone which contains the subnet: ACCEPT net loc: A route is added ip r add via A ping from the firewall will show on the destination interface # tcpdump -i eth0 -n host 15:03:29.258592 IP > ICMP echo request, id 48701, seq 1, length 64 even if it timesout because the IP is not actually there # ping -c 1 PING ( 56(84) bytes of data. --- ping statistics --- The subnet must be excluded from the masquerading rules by setting /etc/shorewall/masq as follows: eth1 eth0! which says to masquerade all but the subnet that is transparently routed. The result can then be checked from a virtual machine to which an IP has been routed with: # wget --quiet -O - http://bot.whatismyipaddress.com ; echo http://dachary.org/?feed=rss2&p=2525 0 Mixing Ceph and LVM volumes in OpenStack http://dachary.org/?p=2518 http://dachary.org/?p=2518#comments Tue, 19 Nov 2013 11:14:41 +0000 Loic Dachary http://dachary.org/?p=2518 Continue reading ]]> Ceph pools are defined to collocate volumes and instances in OpenStack Havana. For volumes that do not need the resilience provided by Ceph, a LVM cinder backend is defined in /etc/cinder/cinder.conf: and appended to the list of existing backends: A cinder volume type is created and associated with it: # cinder type-create lvm | ID | Name | | c77552ff-e513-4851-a5e6-2c83d0acb998 | lvm | # cinder type-key lvm set volume_backend_name=LVM # cinder extra-specs-list | ID | Name | extra_specs | | c77552ff-e513-4851-a5e6-2c83d0acb998 | lvm | {u'volume_backend_name': u'LVM'} | To reduce the network overhead, a backend availability zone is defined for each bare metal by adding to /etc/cinder/cinder.conf: and restarting nova-volume: # restart nova-volume # sleep 5 # cinder-manage host list host zone bm0015.the.re@lvm bm0015 where bm0015 is the hostname of the machine. To create a LVM backed volume that is located on bm0015: cinder create --availability-zone bm0015 --volume-type lvm --display-name test 1 In order for the allocation of RBD volumes to keep working without specifying an availability zone, there must be at least one cinder volume running in the default availability zone ( nova presumably ) and configured with the expected RBD backends. This can be checked with: # cinder-manage host list | grep nova bm0017.the.re@rbd-cloudwatt nova bm0017.the.re@rbd-ovh nova bm0017.the.re@lvm nova bm0017.the.re@rbd-default nova bm0017.the.re@rbd-hetzner nova In the above the lvm volume type is also available in the nova availability zone and is used as a catch all when a LVM volume is prefered but collocating it on the same machine as the instance does not matter. 
http://dachary.org/?feed=rss2&p=2518 0 Creating a Ceph OSD from a designated disk partition http://dachary.org/?p=2548 http://dachary.org/?p=2548#comments Mon, 18 Nov 2013 15:34:31 +0000 Loic Dachary http://dachary.org/?p=2548 Continue reading ]]> When a new Ceph OSD is setup with ceph-disk on a designated disk partition ( say /dev/sdc3 ), it will not be prepared and the sgdisk command must be run manually: # osd_uuid=$(uuidgen) # partition_number=3 # ptype_tobe=89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be # sgdisk --change-name="${partition_number}:ceph data" \ --partition-guid="${partition_number}:{osd_uuid}" \ # sgdisk --info=3 /dev/sdc Partition GUID code: 89C57F98-2FE5-4DC0-89C1-F3AD0CEFF2BE (Unknown) Partition unique GUID: 22FD939D-C203-43A9-966A-04570B63FABB Partition name: 'ceph data' The ptype_tobe is a partition type known to Ceph and set when it is being worked on. Assuming /dev/sda is a SSD disk from which a journal partition can be created, the OSD can be prepared with: # ceph-disk prepare --osd-uuid "$osd_uuid" \ --fs-type xfs --cluster ceph -- \ /dev/sdc3 /dev/sda WARNING:ceph-disk:OSD will not be hot-swappable if ... Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries. The operation has completed successfully. meta-data=/dev/sdc3 isize=2048 agcount=4, agsize=61083136 blks = sectsz=512 attr=2, projid32bit=0 data = bsize=4096 blocks=244332544, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 log =internal log bsize=4096 blocks=119303, version=2 The journal and data partitions should be associated with each other : # ceph-disk list /dev/sda : /dev/sda1 ceph journal, for /dev/sdc3 /dev/sdb : /dev/sdb2 other, ext4, mounted on / /dev/sdb3 swap, swap /dev/sdc : /dev/sdc1 other, primary /dev/sdc2 other, ext4, mounted on /mnt /dev/sdc3 ceph data, prepared, cluster ceph, journal /dev/sda1 The type of the partition can be changed so that udev triggered scripts notice it and provision the osd. # ptype=4fbd7e29-9d25-41b8-afd0-062c0ceff05d # sgdisk --typecode="${partition_number}:${ptype}" /dev/sdc # udevadm trigger --subsystem-match=block --action=add # df | grep /var/lib/ceph /dev/sdc3 932G 160M 931G 1% /var/lib/ceph/osd/ceph-9 http://dachary.org/?feed=rss2&p=2548 0 Display the default Ceph configuration http://dachary.org/?p=2515 http://dachary.org/?p=2515#comments Sat, 16 Nov 2013 11:21:48 +0000 Loic Dachary http://dachary.org/?p=2515 Continue reading ]]> The ceph-conf command line queries the /etc/ceph/ceph.conf file. # ceph-conf --lookup fsid The –show-config option can be used to display the config of a running daemon: ceph -n osd.123 --show-config When no name is specified, it will show the default Ceph configuration ceph --show-config --conf /dev/null http://dachary.org/?feed=rss2&p=2515 0 Migrating from ganeti to OpenStack via Ceph http://dachary.org/?p=2506 http://dachary.org/?p=2506#comments Wed, 13 Nov 2013 10:41:30 +0000 Loic Dachary http://dachary.org/?p=2506 Continue reading ]]> On ganeti, shutdown the instance and activate its disks: z2-8:~# gnt-instance shutdown nerrant Waiting for job 1089813 for nerrant... 
z2-8:~# gnt-instance activate-disks nerrant On an OpenStack Havana installation using a Ceph cinder backend, create a volume with the same size: # cinder create --volume-type ovh --display-name nerrant 10 | Property | Value | | attachments | [] | | availability_zone | nova | | bootable | false | | created_at | 2013-11-12T13:00:39.614541 | | display_description | None | | display_name | nerrant | | id | 3ec2035e-ff76-43a9-bbb3-6c003c1c0e16 | | metadata | {} | | size | 10 | | snapshot_id | None | | source_volid | None | | status | creating | | volume_type | ovh | # rbd --pool ovh info volume-3ec2035e-ff76-43a9-bbb3-6c003c1c0e16 rbd image 'volume-3ec2035e-ff76-43a9-bbb3-6c003c1c0e16': size 10240 MB in 2560 objects order 22 (4096 KB objects) block_name_prefix: rbd_data.90f0417089fa format: 2 features: layering On a host connected to the Ceph cluster and running a linux-kernel > 3.8 ( because of the format: 2 above ), map to a bloc device with: # rbd map --pool ovh volume-3ec2035e-ff76-43a9-bbb3-6c003c1c0e16 # rbd showmapped id pool image snap device 1 ovh volume-3ec2035e-ff76-43a9-bbb3-6c003c1c0e16 - /dev/rbd1 Copy the ganeti volume with: z2-8:~# pv < /dev/drbd10 | ssh bm0014 dd of=/dev/rbd1 2,29GB 0:09:14 [4,23MB/s] [==========================> ] 22% ETA 0:31:09 and unmap the device when it completes. rbd unmap /dev/rbd1 The volume is ready to boot. http://dachary.org/?feed=rss2&p=2506 0 Collocating Ceph volumes and instances in a multi-datacenter setup http://dachary.org/?p=2494 http://dachary.org/?p=2494#comments Tue, 12 Nov 2013 09:35:14 +0000 Loic Dachary http://dachary.org/?p=2494 Continue reading ]]> OpenStack Havana is installed on machines rented from OVH and Hetzner. An aggregate is created for machines hosted at OVH and another for machines hosted at Hetzner. A Ceph cluster is created with a pool using disks from OVH and another pool using disks from Hetzner. A cinder backend is created for each Ceph pool. From the dashboard, an instance can be created in the OVH availability zone using a Ceph volume provided by the matching OVH pool. Creating availability zones The availability zones are created as a side effect of creating an aggregate. # nova aggregate-create ovh ovh | Id | Name | Availability Zone | Hosts | Metadata | | 2 | ovh | ovh | [] | {u'availability_zone': u'ovh'} | # nova aggregate-create hetzner hetzner | Id | Name | Availability Zone | Hosts | Metadata | | 3 | hetzner | hetzner | [] | {u'availability_zone': u'hetzner'} | The hosts are assigned to their availability zone: # nova aggregate-add-host ovh bm0015.the.re Aggregate 2 has been successfully updated. 
| Id | Name | Availability Zone | Hosts | Metadata | | 2 | ovh | ovh | [u'bm0015.the.re'] | {u'availability_zone': u'ovh'} | The result can be checked with # nova availability-zone-list | Name | Status | | internal | available | | |- bm0015.the.re | | | | |- nova-conductor | enabled :-) 2013-11-11T14:26:43.000000 | | | |- nova-consoleauth | enabled :-) 2013-11-11T14:26:43.000000 | | | |- nova-scheduler | enabled :-) 2013-11-11T14:26:43.000000 | | | |- nova-cert | enabled :-) 2013-11-11T14:26:43.000000 | | ovh | available | | |- bm0015.the.re | | | | |- nova-compute | enabled :-) 2013-11-11T14:26:48.000000 | | hetzner | available | | |- bm0016.the.re | | | | |- nova-compute | enabled :-) 2013-11-11T14:26:49.000000 | | nova | available | Creating the Ceph pools The crush map is extracted with ceph osd getcrushmap -o crush.bin crushtool -d crush.bin -o crush.txt It is edited to add datacenter ovh { id -5 alg straw hash 0 item bm0014 weight 1.820 item bm0015 weight 1.820 rule ovh { ruleset 3 type replicated min_size 1 max_size 10 step take ovh step chooseleaf firstn 0 type host step emit and sent back to the Ceph monitors with crushtool -c crush.txt -o crush.bin ceph osd setcrushmap crush.bin An ovh pool is created and set to use the ovh ruleset: ceph osd pool create ovh 128 ceph osd pool set ovh crush_ruleset 3 The crush.txt file also contains the ruleset for the hetzner pool. Creating cinder backends In the /etc/cinder/cinder.conf file of the host running cinder-volume, one cinder backend is defined for each Ceph pool: In order to enable the –volume-type ovh option of cinder create, the corresponding type keys must be created: # cinder type-create ovh | ID | Name | | 48645332-4835-4a9b-9078-cd735f47dae5 | ovh | # cinder type-key ovh set volume_backend_name=RBD_OVH # cinder extra-specs-list | ID | Name | extra_specs | | 48645332-4835-4a9b-9078-cd735f47dae5 | ovh | {u'volume_backend_name': u'RBD_OVH'} | Check that the cinder scheduler is set as follows in /etc/cinder/cinder.conf Assembling instance and volume After creating a volume using the OVH cinder backend: cinder create --volume-type ovh --display-name test 1 An instance is created in the OVH availability zone: nova boot --availability-zone ovh \ --image 'cirros image' \ --key-name key_loic \ --nic net-id=e1d72366-1f25-42c1-a953-a944c9f932e3 \ --flavor m1.tiny --poll try The volume is attached to the instance nova volume-attach try 045d1cae-cd9b-4d64-b0b8-544f5b6d0c5a /dev/vdb http://dachary.org/?feed=rss2&p=2494 0 Fragmented floating IP pools and multiple AS hack http://dachary.org/?p=2466 http://dachary.org/?p=2466#comments Mon, 11 Nov 2013 12:39:53 +0000 Loic Dachary http://dachary.org/?p=2466 Continue reading ]]> When an OpenStack Havana cluster is deployed on hardware rented from OVH and Hetzner, IPv4 are rented by the month and are either isolated ( just one IP, not a proper subnet ) or made of a collection of disjoint subnets of various sizes. OpenStack does not provide a way to deal with this situation and a hack involving a double nat using a subnet of floating IP is proposed. A L3 agent runs on an OVH machine and pretends that is a subnet of floating IPs, although they are not publicly available. Another L3 agent is setup on a Hetzner machine and uses the subnet. When an instance is created, it may chose a Hetzner private subnet, which is connected to a Hetzner router for which the gateway has been set to a network providing the Hetzner floating IPs. And the same is done for OVH. 
A few floating IP are rented from OVH and Hetzner. On the host running the L3 agent dedicated to the OVH AS, a 1 to 1 nat is established between each IP in the subnet and the OVH floating IPs. For instance the following /etc/init/nat.conf upstart script associates with the floating IP. description "OVH nat hack" start on neutron-l3-agent ip addr add dev br-ex while read private public ; do test "$public" || continue iptables -t nat -A POSTROUTING -s $private/32 -j SNAT --to-source $public iptables -t nat -A PREROUTING -d $public/32 -j DNAT --to-destination $private done <<EOF end script Fragmented floating IP pools and routing Each floating IP ( also called failover IP ) provided by either OVH or Hetzner is uniquely associated to the MAC of the ethernet interface of a given hardware, using the proprietary web interface provided by OVH and Hetzner. The packets destined to the floating IP are routed to the interface even if the interface does not answer to arp who-has. The subnet to which a given floating IP belong is unknown and it is not possible to figure out if there is a gateway in the same subnet as a floating IP. If an instance is associated with such a floating IP, the outgoing packets are expected to be routed via the same gateway as the host. For instance on an OVH host: root@bm0015:~# ip addr show dev eth0 2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 70:54:d2:1a:9d:76 brd ff:ff:ff:ff:ff:ff inet brd scope global eth0 root@bm0015:~# ip route default via dev eth0 metric 100 The IP address of the bm0015.the.re hardware ( it is not a floating IP ) is, is in a /24 subnet and is its router. The floating IP is routed to bm0015.the.re and although it is in a completely different subnet, it is expected to use the same gateway, that is AS segregation An external network is defined for OVH: neutron net-create ovh --router:external=True The L3 agent for OVH is configured to only handle this network so that multiple agents can be run. # neutron net-list --name ovh | id | name | subnets | | 033b4851-af21-478e-9a57-e624ff0b1340 | ovh | 9cb918b6-8737-4416-a5e0-e4bc5a5e6718 | # grep 033b4851 /etc/neutron/l3_agent.ini gateway_external_network_id = 033b4851-af21-478e-9a57-e624ff0b1340 It is also set to be the default for internal routers # grep internal /etc/neutron/l3_agent.ini handle_internal_only_routers = True The subnet that will act as if it was a public subnet is created within the ovh net: # neutron subnet-create --name ovh --disable-dhcp \ --allocation-pool=start=,end= \ Created a new subnet: | Field | Value | | allocation_pools | {"start": "", "end": ""} | | cidr | | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | | | host_routes | | | id | 9cb918b6-8737-4416-a5e0-e4bc5a5e6718 | | ip_version | 4 | | name | ovh | | network_id | 033b4851-af21-478e-9a57-e624ff0b1340 | | tenant_id | 2a2365c4031d47d890bb403db7e92583 | The –disable-dhcp prevents running a dnsmasq process that is not going to be used anyway. The allocation pool is not served by dnsmasq for floating IPs. A router is created and the ovh network is set to be its gateway, implicitly meaning the subnet is going to be used when allocating floating IPs. # neutron router-create ovh # neutron router-gateway-set ovh ovh Set gateway for router ovh A private subnet is created and connected to the router. All instances that intend to get a floating IP from the ovh pool must be connected to this subnet, otherwise there will be no route between them and the floating IP. 
# neutron net-create ovh-lan # neutron subnet-create --name ovh-lan ovh-lan # neutron router-interface-add ovh ovh-lan Double NAT hack The first nat is established by OpenStack between and Another nat is maintained outside of OpenStack to map the IPs from the subnet to actual public IPs. The map is maintained manually from an upstart script that runs after the neutron L3 agent. The instances that have no associated public IPs are masqueraded behind ( the gateway_ip picked by default when the ovh subnet was created above ). The line will masquerade them once more, using whatever public IP is the default for eth0. The full script is added in /etc/init/nat.conf and can be run manually with start nat. description "OVH nat hack" start on neutron-l3-agent ip addr add dev br-ex while read private public ; do test "$public" || continue done <<EOF end script After discussing with Edouard Thuleau, Chux Uzoeto, Mathieu Rohon, Pierre-Andre Morey, Christophe Sauthier, Erwan Gallen, Carl Perry, Sylvain Afchain, Emilien Macchi I’ve not been able to find a simpler or less ad-hoc way. It is not possible to attach eth0 to br-ex because OVH will raise alerts if it sees unexpected MAC addresses. It is possible to route a floating IP to br-ex. However it is not possible to subnet-create a single IP ( there needs to be at least one other IP used as a gateway and there is no way to specify a gateway that is no in the same subnet as the IP ). It is also not possible to update the allocation pools using neutron subnet-update because it is a read only attribute. Although it is possible to hack routes and IP directly in the net namespace of the router, the end result is more contorted than relying on a double nat. http://dachary.org/?feed=rss2&p=2466 3 HOWTO OpenStack Grizzly and Ceph with Puppet on Ubuntu 12.04 http://dachary.org/?p=2393 http://dachary.org/?p=2393#comments Mon, 28 Oct 2013 21:11:42 +0000 Loic Dachary http://dachary.org/?p=2393 Continue reading ]]> For months I’ve asked people working with puppet modules on a daily basis for a HOWTO that I could follow to setup a new cluster with the Grizzly OpenStack release. Such a HOWTO is not needed for people who develop the modules or deploy OpenStack for a living. It is however very helpful for the casual system administrator willing to get it running in a few hours, all by herself/himself. The packstack seems to be exactly that : a walkthru of a well tested procedure that anyone with a basic understanding of what OpenStack is can rely on. It requires an RPM based distribution and this may be a significant effort for someone used to DEB based operating systems. For Ubuntu users, the kickstack project was started in summer 2013 and targets hands on sessions, with the declared goal to make it easy for people new to both OpenStack and puppet. Later on, it inspired Dan Bode to use a new approach based on dependency injection to implement openstack-installer for Cisco. The proposed HOWTO uses openstack-installer to deploy OpenStack against an existing Ceph cluster and provides: • keystone • nova ( kvm ) • quantum ( openvswitch + gre ) • cinder ( Ceph backend ) • horizon • glance ( Ceph backend ) Running the HOWTO The HOWTO was written October 17th, 2013. It was tried and fixed a few times to get it right. It is meant to be followed sequentially to implement the proposed use case. It does not describe any variation, nor does it explain how to debug a problem, should there be one. The recommended procedure is to start again from scratch. 
Developping the HOWTO openstack-installer can be tested on a single machine to get a baseline of how things are expected to work. It is helpful when stumbling into a problem on bare metal : is it because a mistake was done ? Or is it because a bug was found in openstack-installer ? Once the test environment is available, the bare metal machines can be installed with Ubuntu precise. The kernel must be 3.2 and not be updated to 3.8 because the openvswitch kernel module will fail to compile. Each machine must have two network interfaces. If they are not connected on a LAN, virtual interfaces can be created using the l2mesh puppet module. Regarding Ceph, the simplest option is to create a test cluster. An alternative is to follow the quick start ceph-deploy guide. In both cases cephx is disabled to simplify the deployment. It will eventually be easier to use a puppet module but the development is currently fragmented and not trivial to re-use. Using puppet scenario compile_all role_name helps figure out what parameters are set by scenario_node_terminus. As of today the master branch of openstack-installer targets Grizzly but this is likely to change when Havana is released. The changes that were made while preparing this post are listed here for the record but are now upstream: http://dachary.org/?feed=rss2&p=2393 1 gerritexec: continuous integration one-liner http://dachary.org/?p=2448 http://dachary.org/?p=2448#comments Sun, 27 Oct 2013 14:38:03 +0000 Loic Dachary http://dachary.org/?p=2448 Continue reading ]]> gerritexec is a command line tool listening to gerrit on a designated project. On each new patchset, it will: • git clone the project • git pull the patchset • cd in the git tree and run a script • negatively review the patchset ( -1 ) otherwise GEM_HOME=~/.gems gerritexec --hostname review.openstack.org \ --username puppetceph \ --project stackforge/puppet-ceph Larger projects should consider using zuul or a gerrit jenkins plugin. Using a virgin Ubuntu 12.04.3, it can be installed as follows: sudo apt-get install python-pip git ruby-bundler libxml2-dev libxslt-dev sudo pip install gerritexec Assuming the ssh private key for the puppetceph user is found in ~/.ssh/id_rsa and that it is a service user able to review the stackforge/puppet-ceph project, gerritexec can be run as follows: gerritexec --hostname review.openstack.org \ --verbose \ --username puppetceph \ --script 'bundle exec rake spec:system' \ --project stackforge/puppet-ceph It is better run from a screen to capture the output generated by –verbose and help with debugging should a problem occur. http://dachary.org/?feed=rss2&p=2448 0
22
15
Tell me more × I have a Debian Squeeze server with Apache2 and Subversion on-board. The Subversion version is 1.6.12 (r955767). It is from debian repos. But recently I installed Subversion v1.7.7 from sources into /usr/local/ and now in console I see root@test:~# svn --version svn, version 1.7.7 (r1393599) compiled Dec 6 2012, 17:28:19 root@test:~# svnadmin --version svnadmin, version 1.7.7 (r1393599) compiled Dec 6 2012, 17:28:19 root@test:~# svnserve --version svnserve, version 1.7.7 (r1393599) compiled Dec 6 2012, 17:28:19 But when I access this server through Apache I'm getting the following server signature Powered by Subversion version 1.6.12 (r955767). Apache virtual host config is <VirtualHost XXX.XXX.XXX.XXX:80> ServerAdmin aboritskiy@XXXXXXXXX.XX ServerName svn.XXXXXXXXX.XX HostnameLookups Off UseCanonicalName Off ServerSignature On <IfModule mod_userdir.c> UserDir public_html Include /etc/apache2/mod_userdir.conf <Directory "/var/svn/"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all <Location /> DAV svn SVNParentPath /var/svn/repos AuthType Basic AuthName "Advance Digital Subversion Repository." AuthUserFile /etc/apache2/dav_svn.passwd Require valid-user So the question is: How does Apache choose the version of Subversion to work with? How to change this setting? share|improve this question add comment 2 Answers You also need to upgrade the Apache webDav / SVN module (libapache2-svn) so you'll either need to grab the Apache source as well and compile an updated version of Apache and the lib, else apt-get yourself the latest Apache, SVN and libapache2-svn packages. share|improve this answer add comment If you look in /etc/apache2/mods-enabled/dav_svn.load, you'll see LoadModule dav_svn_module /usr/lib/apache2/modules/ LoadModule authz_svn_module /usr/lib/apache2/modules/ You can confirm that those point to Debian's Subversion 1.6.12 module: $ strings /usr/lib/apache2/modules/ | grep 'Powered by' <hr noshade><em>Powered by <a href="">Subversion</a> version 1.6.12 (r955767).</em> You should modify /etc/apache2/mods-available/dav_svn.load to point to your self-built Subversion modules for Apache. share|improve this answer add comment Your Answer
14
9
Tell me more × I need to make a program in which the user inputs a word and I need to do something to each individual letter in that word. They cannot enter it one letter at a time just one word. I.E. someone enters "test" how can I make my program know that it is a four letter word and how to break it up, like make my program make four variables each variable set to a different letter. It should also be able to work with bigger and smaller words. Could I use a for statement? Something like For letter ste that letter to a variable, but what is it was like a 20 character letter how would the program get all the variable names and such? share|improve this question Can you explain the exact use case where you want to do this? I mean, why would you like to have 20 variables for 20 length string at all? –  Rohit Jain Feb 2 at 22:32 It sounds like you are looking for a data structure like a list, but it's worth noting a string is also an iterable, so it's probably not needed. –  Lattyware Feb 2 at 22:34 a WORD is a string/list of Characters. You can parse a word like parsing a list. What is the problem ?? –  Vahid Rafiei Feb 2 at 22:37 It sounds like you might benefit from some reading on basic data structure types in programming to give you an idea of how data can be stored and used in nice ways. Khan Academy has some video lectures on Python like which might help (disclaimer: I've only flicked through the video - it may not be as good as I think). –  m.brindley Feb 2 at 23:22 add comment 5 Answers Do you mean something like this? >>> s = 'four' >>> l = list(s) >>> l ['f', 'o', 'u', 'r'] Even though that's (apparently) what you think you wanted, it's probably not necessary because it's possible for a string to hold virtually any size of a word -- so a single string variable likesabove should be good enough for your program verses trying to create a bunch of separately named variables for each character. For one thing, it would be difficult to write the rest of the program because you wouldn't to know what valid variable names to use. The reason it's OK not to have separate variable for each character is because a single string can have any number of characters in it as well as be empty. Python's built-inlen()function will return a count of the number of letters in a string if applied to one, so the result oflen(s)in the above would be4. Any character in a string can be randomly accessed by indexing it with an integer between0andlen(s)-1inside of square brackets, so to reference the third character you would uses[2]. It's useful to think of the index as the offset or the character from the beginning of the string. Even so, in Python using indexing is often not needed because you can also iteratively process each character in a string in aforloop without using them as shown in this simple example: num_vowels = 0 for ch in s: if ch in 'aeiou': num_vowels += 1 print 'there are', num_vowels, 'vowel(s) in the string', s Python also has many other facilities and built-ins that further help when processing strings (and in fact could simplify the above example), which you'll eventually learn as you become more familiar with the language and its many libraries. share|improve this answer Yes that's perfect. 
–  user1836262 Feb 2 at 22:38 add comment When you iterate a string, it returns the individual characters like for c in thestring: You can use this to put the letters into a list if you really need to, which will retain its order but list(string) is a better choice for that (be aware that unordered types like dict or set do not guarantee any order). share|improve this answer add comment You don't have to do any of those; In Python, you can access characters of a string using square brackets: >>> word = "word" >>> print(word[0]) >>> print(word[3]) >>> print(len(word)) share|improve this answer add comment You don't want to assign each letter to a separate variable. Then you'd be writing the rest of your program without even being able to know how many variables you have defined! That's an even worse problem than dealing with the whole string at once. What you instead want to do is have just one variable holding the string, but you can refer to individual characters in it with indexing. Say the string is in s, then s[0] is the first character, s[1] is the second character, etc. And you can find out how far up the numbers go by checking len(s) - 1 (because the indexes start at 0, a length 1 string has maximum index 0, a length 2 string has maximum index 1, etc). That's much more manageable than figuring out how to generate len(s) variable names, assign them all to a piece of the string, and then know which variables you need to reference. Strings are immutable though, so you can't assign to s[1] to change the 2nd character. If you need to do that you can instead create a list with e.g. l = list(s). Then l[1] is the second character, and you can assign l[1] = something to change the element in the list. Then when you're done you can get a new string out with s_new = ''.join(l) (join builds a string by joining together a sequence of strings passed as its argument, using the string it was invoked on to the left as a separator between each of the elements in the sequence; in this case we're joining a list of single-character strings using the empty string as a separator, so we just get all the single-character strings joined into a single string). share|improve this answer add comment x = 'test' counter = 0 while counter < len(x): print x[counter] # you can change this to do whatever you want to with x[counter] counter += 1 share|improve this answer There is no real reason to use a while i < len(x) here - it just introduces more complicated code instead of using the tools Python provides to do this in a nicer way. –  m.brindley Feb 2 at 23:16 add comment Your Answer
14
6
Tell me more × i am just starting with python (python3) because i read its good for the euler project since it can handle very big numbers. now i am struggling with a quite simple problem of converting float to int. Why don't i get the same result for this: num = 6008514751432349174082765599289028910605977570 print('num {0} '.format(int(num))) num = num / 2 print('num /2 {0} '.format(int(num))) num = num * 2 print('num *2 {0} '.format(int(num))) output for this is: num 6008514751432349174082765599289028910605977570 num /2 3004257375716174771611310192874715313222975488 num *2 6008514751432349543222620385749430626445950976 share|improve this question add comment 1 Answer up vote 6 down vote accepted You are using float division, which cannot handle large numbers with as much precision, after which you are flooring the result by casting it back to an int(). Don't do that, that causes data loss. Use integer (floor) division with // instead: >>> 6008514751432349174082765599289028910605977570 // 2 * 2 This still can lead to rounding errors of course, if the input value is not divisible by 2 without flooring: >>> 6008514751432349174082765599289028910605977571 // 2 * 2 but floating point values are limited in precision based on your exact CPU support; see sys.float_info to see what exact limitations your platform imposes on float numbers. On my Mac, sys.float_info.dig tells me my platform supports 15 digits of precision, but you are dividing a 46-digit integer number. This means that you throw away the bottom 30 digits from your large integer when using float division: >>> len(str(int(6008514751432349174082765599289028910605977570 / 2) - (6008514751432349174082765599289028910605977570 // 2))) That is a lot of precision loss there. :-) share|improve this answer ah, perfect. thanks –  santa May 2 at 15:37 @MartijnPieters when does someone use // instead of /? –  dustin May 2 at 15:44 @dustin: when you want to apply integer division (so the result being an int) instead of a floating point value. –  Martijn Pieters May 2 at 15:46 @dustin: When you are working with integer values with more digits than the number of digits your platform can support (see sys.float_info.dig) then you'll see precision loss when you use float division. –  Martijn Pieters May 2 at 15:53 add comment Your Answer
9
6
Running Plone and Zope behind an Apache 2 web server « Return to page index How to set up an Apache 2 web server as proxy with disk caching and deflating. Preface and Apache 2 module configuration This tutorial describes how to set up an Apache 2 webserver as proxy with disk caching and deflating (compressing like mod_gzip) for Zope under Debian Testing. It may or may not be working with other distributions. Please send me feedback. • Apache 2 installed and running • The following Apache 2 modules installed (they should be shipped with Apache 2) • mod_cache • mod_deflate • mod_disk_cache • mod_headers • mod_mime_magic • mod_proxy • mod_proxy_http • mod_rewrite • Zope installed and running This howto is about Zope 2.7 under Python 2.3.3 but it should work with every other zope2 version. For Zope 2.7 you need • Zope 2.7 as source tar.gz or cvs checkout (Zope-2_7-branch) from • Python 2.3.3 with unicode enabled (python2.3) • python2.3-xml (PyXML) • python2.3-dev (headers, distutils) • python2.3-psyco (python code optimizer) • some additional packages like python2.3-docutils (reST), python2-3-imaging (PIL) We have an Apache 2 server listening on both http and https requests on all interfaces. The site is a zope with as an alias. Every request to a manage url is rewritten to to secure management access. The zope http server is running on port 10080 at localhost and the site is stored in /example_org/. Apache 2 directory layout Debian is using the following directory structur for Apache 2 base directory for all configuration files main configuration file. This file loads the other configurations from the directories mentioned below. configuration file for Listen $Port directory for additional configuration options available sites enabled sites, may contain softlinks to files in /etc/apache2/site-available. Only this sites are loaded available modules (*.load) and module configurations (*.conf) enabled modules, may contain softlinks to files in /etc/apache2/mods-available. Only this modules are loaded. You must also link the conf file if it exists. directory containing ssl cert files. I suggest creating three directories crl, crt and key in this directory. To enable a site or a module symlink it from the -available to the -enabled directory: user@myhost:/etc/apache2/sites-enabled$ ln -s ../sites-available/default Loading the Apache 2 modules Debian users should be save to use the default files. LoadModule deflate_module /usr/lib/apache2/modules/ LoadModule headers_module /usr/lib/apache2/modules/ LoadModule mime_magic_module /usr/lib/apache2/modules/ <IfModule mod_mime_magic.c> MIMEMagicFile /etc/apache2/magic LoadModule cache_module /usr/lib/apache2/modules/ LoadModule disk_cache_module /usr/lib/apache2/modules/ LoadModule proxy_module /usr/lib/apache2/modules/ LoadModule proxy_http_module /usr/lib/apache2/modules/ Don't symlink it! We will use our own configuration file. LoadModule rewrite_module /usr/lib/apache2/modules/ Don't symlink it until you have a valid configuration and the all necessary ssl keys: LoadModule ssl_module /usr/lib/apache2/modules/ Don't symlink it! We will use our own configuration file. After you check or created the files symlink every file mods-enabled. Keep in mind that the module load order in the modules.load files are very important. If there are already some files in the mods-enabled directory make shure no module is loaded twice! Custom configurations Create the following files in /etc/apache2/conf.d. 
This files contains our own configurations so we won't bust the default configurations from debian. <IfModule mod_deflate.c> DeflateCompressionLevel 3 DeflateFilterNote Input instream DeflateFilterNote Output outstream DeflateFilterNote Ratio ratio # Netscape 4.x has some problems... BrowserMatch ^Mozilla/4 gzip-only-text/html # Netscape 4.06-4.08 have some more problems # MSIE masquerades as Netscape, but it is fine # the above regex won't work. You can use the following # workaround to get the desired effect: # Don't compress images, java scripts and style sheets SetEnvIfNoCase Request_URI \ \.(?:gif|jpe?g|png|js|css)$ no-gzip dont-vary # Make sure proxies don't deliver the wrong content # this needs mod_headers but it's very important # so I don't add a IfModule around it Header append Vary User-Agent env=!dont-vary # we will add some configuration options later proxy.conf (you can copy the file from mods-available and alter it): <IfModule mod_proxy.c> #turning ProxyRequests on and allowing proxying from all may allow #spammers to use your proxy to send email. ProxyRequests Off #<Proxy *> # Order deny,allow # Deny from all # #Allow from # allow to connect to localhost with port ending with 80 and 90 (www, webdav) # the having at least 2 digets before the 80 or 90 <ProxyMatch http://localhost:[0-9]{2,}?[8|9]0/.*> Order deny,allow Allow from all ProxyVia On # (no cacheing without CacheRoot) CacheRoot "/var/cache/apache2/proxy" # 300MB CacheSize 307200 # in hours CacheGcInterval 4 CacheMaxExpire 24 CacheLastModifiedFactor 0.1 CacheDefaultExpire 1 CacheForceCompletion 100 # Again, you probably should change this. <IfModule mod_ssl.c> SSLEngine on # path to a directory containing the ssl ca keyring and revocation list # you must create hash symlinks using the right Makefile! SSLCACertificatePath /etc/apache2/crt/ SSLCARevocationPath /etc/apache2/crl/ SSLSessionCache shm:/var/log/apache2/ssl_scache(128000) SSLMutex sem SSLRandomSeed startup file:/dev/urandom 512 SSLRandomSeed connect file:/dev/urandom 512 If you think everything is ok, restart apache2: $ /etc/init.d/apache2 restart Preparing virtual hosting Virtual hosting means serving more than one domain from one ip address. The Apache 2 webservers knows what domain the browser wants by using the domain name that is send by the browser. Therefor it isn't possible to use virtual hosting for secure http (https, http over ssl) because the ssl handshake must be done before negotiationing the domain name. It's a shame that browsers and webservers aren't TLS aware. Check the file /etc/apache2/ports.conf and see if Apache 2 is listening on the default port for http: Listen 80 If you want to use SSL, you need to listen on the default port for https, too: Listen 80 <IfModule mod_ssl.c> Listen 443 If you have multiple network devies and/or ip adresses you can bind Apache to a single address: Next you need to configure Apache 2 to use so called NameVirtualHost for virtual hosting. This is the easiest setup because you just need to provide the server name and the address/port in each virtual domain configuration section. Change the file /etc/apache2/conf.d/namevirtualhost.conf and add this line: NameVirtualHost *:80 <IfModule mod_ssl.c> NameVirtualHost *:443 The entries must look like the entries in ports.conf but with a leading *: if Apache 2 is listening on every address. Restart Apache 2 and see if you can browse to your server. 
Maybe Apache 2 is complaining that it cannot find any virtual hosts matching the NameVirtualHost configuration but that's no problem. We'll fix that later. Zope configuration Next up is making sure your Zope server is configured correctly. Configuring the Zope server You need to configure the Zope server next. At least you should change the following options in your etc/zope.conf: debug-mode on Debugging is enabled by default. Leave it enabled until your Zope server works and then disabled it in production mode. CMF and Plone will run much faster in production mode. effective-user zope Define an existing effective user if you want to start Zope from the init process or as root. locale de_DE@euro Enables locales in Zope and sets it to de_De@euro (ISO-8859-15). You should set this var to your system default LC. On debian use dpkg-reconfigure -plow locales to see a list of locales and to compile some. datetime-format international This is a good idea until you don't live in the usa. #ip-address unset If you don't set one Zope will bind to all interfaces except if you define one in a server section. port-base 10000 Port offset (see below) address (in <http-server>) Bind the http server to the loopback interface (localhost or on port 10080 (port-base 10000 + 80). Nobody is able to connect to your Zope server directly. cache-size 5000 (in <zodb_db main>) Increases the cache size of your ZODB to 5000 objects. The cache should be as large as possible to increase the speed of Zope, but take care not to let it eat up all your RAM. If it's too large and your system needs to use the swap space on your hard drive, your Zope will become very slow! If you are running in debug mode you should use $INSTANCE-HOME/bin/runzope to start Zope. You are able to read all debug information in your console and you can easily stop Zope by pressing CTRL+C. Later you should disable the debug mode and run Zope with $INSTANCE-HOME/bin/zopectl in daemon mode. zopectl is a cool tool and allows you a very easy integration of Zope in your boot process: root@host:/$ cd /etc/init.d root@host:/etc/init.d$ ln -s /path/to/your/zope/instance/bin/zopectl myzope root@host:/etc/init.d$ /etc/init.d/myzope start Configuring the Zope instance Browse to the ZMI (Zope management interface) of your Zope server directly (without apache as frontend) http://localhost:10080/manage. If you don't have a browser on your server just bind the http server of Zope temporarily to all interfaces by simple removing and restarting Zope. If you have a linux server with lynx or links installed you can use this little trick to avoid problems with frames and the pull down add menu: host:/$ lynx http://localhost:10080/manage_addProduct/SiteAccess/manage_addVirtualHostMonsterForm Add a Virtual Host Monster with the id VirtualHostMonster to the root of your Zope instance (make sure it's not in the Plone instance, it should be one level above that). You could chose any id you like but it needs to be unique for your whole site so I think this is a good idea. :) Don't add more than one VHM to your Zope instance! One is enough for every subpage. For this example you need to add a folderish type (e.g. a plone site to the root) with the id example_org to the root of your Zope instance. visit at Note: If you are using a buildout based Plone installation, zope.cfg is recreated everytime you run bin/buildout. Some of the changes at least can be made via buildout.cfg. 
In particular binding Zope only to the local host address is achieved by changing http-address in the approriate client section. E.g. for client1: <= client_base recipe = plone.recipe.zope2instance zeo-address = ${zeoserver:zeo-address} http-address = Apache 2 virtual host Then it's time to set up the Apache VirtualHost. A very easy example config file: <VirtualHost *:80> ServerSignature On CustomLog /var/log/apache2/ combined ErrorLog /var/log/apache2/ LogLevel warn <IfModule mod_rewrite.c> RewriteEngine On RewriteRule ^/(.*) \ http://localhost:10080/VirtualHostBase/http/%{SERVER_NAME}:80/example_org/VirtualHostRoot/$1 [L,P] A virtual host serving zope <VirtualHost *:80> ServerSignature On # we don't need a DocumentRoot for a zope only sites #DocumentRoot /var/www/ CustomLog /var/log/apache2/ combined ErrorLog /var/log/apache2/ LogLevel warn # log the deflate compression rate to a file #CustomLog /var/log/apache2/deflate_log deflate <IfModule mod_rewrite.c> RewriteEngine On # use RewriteLog to debug problems with your rewrite rules # disable it after you found the error our your harddisk will be filled *very fast* # RewriteLog "/var/log/apache2/rewrite_log" # RewriteLogLevel 2 # serving icons from apache 2 server RewriteRule ^/icons/ - [L] # rewrite any access to manage to a secure server RewriteRule ^/(.*)/manage(.*) \$1/manage$2 [NC,R=301,L] RewriteRule ^/manage(.*) \$1 [NC,R=301,L] # rewrite any other access to the zope server using a proxy [P] and add the VMH magic keywords # use %{SERVER_NAME} instead of to avoid busting the ServerAlias # %{HTTP_HOST} is bad because it may contain the port RewriteRule ^/(.*) \ <IfModule mod_proxy.c> ProxyVia On # prevent the webserver from beeing used as proxy <LocationMatch "^[^/]"> Deny from all # caching (disabled) # this caches every file with the correct caching informations starting at / <IfModule mod_disk_cache.c> #CacheEnable disk / # compression (disabled) <IfModule mod_deflate.c> #SetOutputFilter DEFLATE Additional rewrite rules Rewrite rules used for serving the secure manage access. 
HTTP host redirecting every access to the https server: <VirtualHost *:80> ServerSignature On # we don't need a DocumentRoot for zope only sites #DocumentRoot /var/www/ CustomLog /var/log/apache2/ combined ErrorLog /var/log/apache2/ LogLevel warn <IfModule mod_rewrite.c> RewriteEngine On # use RewriteLog to debug problems with your rewrite rules # RewriteLog "/var/log/apache2/rewrite_log" # RewriteLogLevel 2 # Rewrite with redirect moved permanently SSL Host serving all manage access to zope: <IfModule mod_ssl.c> <VirtualHost *:443> ServerSignature On DocumentRoot /var/www/ CustomLog /var/log/apache2/ combined ErrorLog /var/log/apache2/ LogLevel warn SSLEngine On SSLCertificateFile /etc/apache2/ssl/crt/ SSLCertificateKeyFile /etc/apache2/ssl/key/ <Location /> # Force usage of ssl encryption # SSL client certs: none, optional, require # Note: optional doesn't work with all browsers SSLVerifyClient optional SSLVerifyDepth 1 SSLOptions +StdEnvVars +StrictRequire #optional +ExportCertData <IfModule mod_rewrite.c> RewriteEngine On # use RewriteLog to debug problems with your rewrite rules # RewriteLog "/var/log/apache2/rewrite_log" # RewriteLogLevel 2 # The following rules will rewrite any access to # to the root of the zope instance running at localhost:10080 RewriteRule ^/zope/main_instance$ \ http://localhost:10080/VirtualHostBase/https/ [L,P] RewriteRule ^/zope/main_instance/(.*) \ http://localhost:10080/VirtualHostBase/https/$1 [L,P] <IfModule mod_proxy.c> ProxyVia On # prevent the webserver from beeing used as proxy <LocationMatch "^[^/]"> Deny from all # don't try to cache ssl! # compression (disabled) <IfModule mod_deflate.c> #SetOutputFilter DEFLATE Download example configuration How VHM works The Virtual Host Monster adds some magic to the traversal process of Zope. Two special keywords are added (VirtualHostBase and VirtualHostRoot) which allows you to configure the virtual host and the base folder inside your Zope instance. Virtual hosting with Zope The VHM part of an ordinary rewrite rules looks like this: ^/(.*) \ The address has seven parts: This is only for apache's mod_proxy module. It configures what server should be accessed including protocol, host and port. In this example mod_proxy is accessing the ZServer at port 100080 on the same host using http. This is the magic keyword to start virtual hosting. You must not add an object called VirtualHostBase to your zope root! The first path segment after VirtualHostBase defines the protocol of the vhost url. The second segment after VirtualHostBase defines the server and the port. Together with the protocol it's the base part of the url, in this example Like VirtualHostBase the protocol and server are no real objects. They are just put into the url for configuration purpose and they are stripped of the url after configuring the virtual host for a request. Now the real traversal through Zope starts. After setting up the protocol and server part of the new url we are traversing through Zope to the new virtual root for the vhost. You can add zero or more objects here. Finally the magic keyword that we have reached the new virtual root for the vhost. Everything after VirtualHostRoot is visible to the browser. $1 and ^/(.*) $1 and ^/(.*) are some regex foo. ^/(.*) means "Match everything starting with a / and save every char after the / in the var $1. Special case _vh_foo Imagen you want to have as the root url of your virtual url. You can get the effect by using the special _vh_ declaration. 

Terminal/CLI Web Text


A filtered extract of terminal and command-line content from two large web-text corpora, designed for upsampling agentic-adjacent data during pretraining.

Subsets

Subset | Rows | Tokens | Size | Quality
clean (default) | 2.33M | 4.6B | 11 GB | ~98% terminal content
unfiltered | 61.3M | 359B | 962 GB | ~15% terminal content
from datasets import load_dataset

# Load the clean subset (default)
ds = load_dataset("AdaMLLab/WebTerminal")

# Load the unfiltered subset
ds = load_dataset("AdaMLLab/WebTerminal", "unfiltered")
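
The unfiltered subset is close to a terabyte on disk, so streaming it is often more practical than a full download. A minimal sketch using the library's standard streaming mode; the train split name is an assumption:

from datasets import load_dataset

# Stream the unfiltered subset instead of downloading all ~962 GB
ds_stream = load_dataset("AdaMLLab/WebTerminal", "unfiltered", split="train", streaming=True)

# Iterate lazily over the first few examples
for example in ds_stream.take(3):
    print(example["term_score"], example["text"][:80])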

Sources

  • DCLM (Zyphra/dclm-dedup)
  • FineWeb (Salesforce/fineweb_deduplicated)

How it was built

v0.1 — Unfiltered

  1. Fast filter: skip any document that doesn't contain obvious CLI indicators ($, sudo, pip install, ```bash, root@, etc.)
  2. Score: remaining docs are scored (0-34) across five signals, each with a per-match point value and a cap:
Filter | Description | Points | Cap
Prompt patterns | Shell prompts like $ cmd, user@host:~$, >>>, root@, PS C:\ | 2 per match | 10
CLI commands | Known commands: sudo, apt-get, pip install, git clone, docker run, curl, ssh, gcc, etc. (30+ patterns) | 1 per unique match | 8
stdout patterns | Output indicators: "successfully installed", "cloning into", drwx (ls output), "packets transmitted", "traceback", version strings | 2 per match | 6
Code blocks | Terminal-flavored code blocks: ```bash, ```shell, <pre><code>, terminal/console div classes | 2 per match | 6
Indented blocks | 3+ consecutive lines indented 4+ spaces (code/output blocks) | 1 per match | 4

Documents scoring >=5 are kept.

  3. Dedup: exact dedup across both source datasets using an xxhash64 hash of the full text. Removed 1,168 duplicates.
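
A rough sketch of this v0.1 pass (fast filter, scoring, and exact dedup) might look like the following. This is not the dataset's actual filtering code: the pattern lists are abbreviated stand-ins for the fuller sets in the table above, and only the point values and caps are taken from it.

import re
import xxhash  # assumes the python-xxhash package

# Fast filter: cheap substring check before any scoring
FAST_INDICATORS = ["$", "sudo", "pip install", "```bash", "root@"]

# Abbreviated pattern lists (the real filter uses many more patterns)
PROMPT_RE = re.compile(r"(?m)^(\$ \S|>>> |root@|\w+@[\w.-]+:~\$|PS C:\\)")
CLI_COMMANDS = ["sudo", "apt-get", "pip install", "git clone", "docker run", "curl", "ssh", "gcc"]
STDOUT_RE = re.compile(r"successfully installed|cloning into|drwx|packets transmitted|traceback", re.I)
CODE_BLOCK_RE = re.compile(r"```(?:bash|shell)|<pre><code>")

def fast_filter(text: str) -> bool:
    return any(indicator in text for indicator in FAST_INDICATORS)

def term_score(text: str) -> int:
    lower = text.lower()
    score = 0
    score += min(10, 2 * len(PROMPT_RE.findall(text)))               # prompt patterns: 2 each, cap 10
    score += min(8, sum(1 for cmd in CLI_COMMANDS if cmd in lower))  # unique CLI commands: 1 each, cap 8
    score += min(6, 2 * len(STDOUT_RE.findall(text)))                # stdout indicators: 2 each, cap 6
    score += min(6, 2 * len(CODE_BLOCK_RE.findall(text)))            # terminal code blocks: 2 each, cap 6
    return score

def keep(text: str) -> bool:
    return fast_filter(text) and term_score(text) >= 5

# Exact dedup on the full text with a 64-bit xxhash digest
seen = set()
def is_duplicate(text: str) -> bool:
    digest = xxhash.xxh64(text.encode("utf-8")).intdigest()
    if digest in seen:
        return True
    seen.add(digest)
    return False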

v0.2 — Clean

The unfiltered subset is ~84-86% noise at lower score levels (5-12), which make up 93% of the data. The root cause: v0.1's scoring uses context-blind keyword matching — CLI command names like find, make, cat appear in normal English prose, bare $ matches currency amounts, and indented Python/SQL code gets scored as terminal content.

v0.2 applies a three-stage structural filter over the unfiltered data:

  1. Context-aware gate: instead of matching a bare $, it requires $ sudo, $ git, $ docker, etc. (dollar sign + space + known command). This step alone eliminates ~87% of documents immediately.
  2. Validation regex: confirms a genuine structural terminal pattern exists — shell prompts followed by real commands, user@host:~$ patterns, Python REPL >>>, tracebacks, ```bash code blocks, Unix file permission listings, man page headers, shebangs.
  3. Weighted structural scoring (term_score_v2): each pattern has a weight (1-3) and occurrences are capped. Documents need term_score_v2 >= 3 to be kept.
Weight | Signal | Max
3 | Command prompts ($ cmd at line start) | 9
3 | SSH prompts (user@host:~$) | 9
2 | Python REPL, file listings, tracebacks, terminal code blocks, git/docker ops, Windows prompts, man pages | 2-6 each
1 | Install output, systemd units, shebangs, sudo commands | 1 each

No indentation-based scoring. No context-blind command substring matching.
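
To make the three stages concrete, here is a rough sketch of what a v0.2-style filter could look like. The regexes, weights, and caps are illustrative reconstructions of a subset of the signals listed above, not the dataset's actual filter code.

import re

# Stage 1 gate: a bare "$" only counts when followed by a space and a known command
GATE_RE = re.compile(r"\$ (?:sudo|git|docker|pip|apt-get|curl|ssh)\b")

# Stage 2 validation: require at least one genuinely structural terminal pattern
VALIDATION_RE = re.compile(
    r"(?m)^\$ \S+"                          # shell prompt followed by a real command
    r"|\w+@[\w.-]+:~\$"                     # user@host:~$ prompts
    r"|^>>> "                               # Python REPL
    r"|Traceback \(most recent call last\)" # tracebacks
    r"|```(?:bash|shell)"                   # terminal code blocks
    r"|^[-d][rwx-]{9}\s"                    # Unix permission listings
    r"|^#!/"                                # shebangs
)

# Stage 3: weighted structural scoring with per-signal caps (illustrative subset)
SIGNALS = [
    (re.compile(r"(?m)^\$ \S+"), 3, 9),      # command prompts at line start
    (re.compile(r"\w+@[\w.-]+:~\$"), 3, 9),  # SSH-style prompts
    (re.compile(r"(?m)^>>> "), 2, 6),        # Python REPL sessions
    (re.compile(r"(?m)^sudo \S+"), 1, 1),    # sudo commands
]

def term_score_v2(text: str) -> int:
    if not GATE_RE.search(text) or not VALIDATION_RE.search(text):
        return 0
    score = 0
    for pattern, weight, cap in SIGNALS:
        score += min(cap, weight * len(pattern.findall(text)))
    return score

def keep(text: str) -> bool:
    return term_score_v2(text) >= 3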

Result: 3.8% of the unfiltered data survives — from 61.3M rows down to 2.33M rows. Quality jumps from ~15% to ~98% genuine terminal/CLI content.

Schema

Clean subset

Column | Type | Description
text | string | Document text
term_score | int32 | Original v0.1 score (5-34)
term_score_v2 | int32 | Structural score from v0.2 filter (3+)

Unfiltered subset

Column | Type | Description
text | string | Document text
term_score | int32 | Original v0.1 score (5-34)
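
Since both scores are stored as plain integer columns, stricter slices can be selected without re-running any filtering. A small sketch using the standard datasets filter API; the threshold of 6 and the "train" split name are illustrative assumptions:

from datasets import load_dataset

ds = load_dataset("AdaMLLab/WebTerminal", split="train")  # assumes the usual "train" split

# Keep only documents with a stronger structural terminal signal
strict = ds.filter(lambda ex: ex["term_score_v2"] >= 6)
print(len(strict), "rows with term_score_v2 >= 6")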

Stats

Clean (v0.2)

  • 2,334,414 rows | 4.6B tokens (Llama-3.2-1B tokenizer) | 11 GB
  • 62 parquet files, ~169-185 MB each, snappy compressed

Unfiltered (v0.1)

  • 61,341,278 rows | 359B tokens | 962 GB
  • 4,187 parquet files, ~180-240 MB each, snappy compressed
v0.1 Score | Count | %
5 | 39,025,201 | 63.62%
6 | 10,787,199 | 17.59%
7 | 4,063,886 | 6.63%
8 | 2,911,983 | 4.75%
9-14 | 3,594,547 | 5.86%
15-34 | 958,462 | 1.56%

Use case

Upsampling agentic-adjacent data during pretraining. The clean subset is recommended for most use cases. The unfiltered subset is available for researchers who want to apply their own filtering.
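
For the upsampling itself, one common approach is to interleave this corpus with a larger general web mix at a fixed sampling probability. A minimal sketch with the datasets interleave_datasets API; the 10% ratio and the choice of Zyphra/dclm-dedup as the general corpus are placeholders, not recommendations:

from datasets import load_dataset, interleave_datasets

# Keep only the shared "text" column so the two schemas match
terminal = load_dataset("AdaMLLab/WebTerminal", split="train", streaming=True).select_columns(["text"])
general = load_dataset("Zyphra/dclm-dedup", split="train", streaming=True).select_columns(["text"])

# Draw roughly 10% of documents from the terminal subset (illustrative ratio)
mixed = interleave_datasets(
    [general, terminal],
    probabilities=[0.9, 0.1],
    seed=42,
    stopping_strategy="all_exhausted",
)

for example in mixed.take(5):
    print(example["text"][:80])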
