Feeds:
Posts
Comments

Posts Tagged ‘electronic music’

I was in my local friendly music shop asking about other things and, for one reason or another, ended up playing with the Roland BOSS SY-300 Guitar Synth pedal. The guy in the shop was enthusing about how well it tracked guitars – including slides and vibrato – and my next thought was “yeah yeah, but how well would it work on a violin?” In my experience, tracking violin pitch is very, very hard. I’ve seen nothing that would do it reliably in either hardware or software, and it’s something I’ve been looking for for a long time now.

To cut a long story short … IT WORKS! It even tracks the octave violin (which is tuned an octave below a normal violin, so a 5-string instrument goes down to the bottom C string of a cello). I stayed in the shop for a couple of hours playing with my violins on the device and was simply astonished at how well it worked. It is possible to confuse it, either with bad technique or by pulling too hard on the low octave C string, but those are hardly major problems for normal use. And by bad technique I don’t mean tuning – if you play out of tune, the SY-300 will simply play the pitch you hit – I mean not placing your finger cleanly on the string, which makes a dull grinding note on the violin anyway. If you slide all the way down a string, the SY-300 will follow you. If you use wide or narrow vibrato, the SY-300 will follow you. If you play loud to soft to loud in a single bow stroke, the SY-300 will follow you.

Amazing.

The guy in the shop thought it worked even better with violin than guitar: the expressive effect of the bow on amplitude, and the ability to play long notes easily, turn the SY-300 into a very expressive synth. It’s interesting to note that playing a synth via a violin (or guitar for that matter) doesn’t sound like playing a synth from a keyboard – it transfers the intrinsic ‘feel’ of the instrument onto the sounds made by the synthesizer. So it is in no way a replacement for a keyboard synth; it’s something totally different.

You need good synthesis knowledge to get the most out of the pedal. A lot of the factory presets are very guitar-orientated, made for a plucked instrument and often with lots of distortion added, so to get the best from a violin you need to get in there and make your own patches. As a violinist who also plays synthesizers this is easy enough for me, but people less familiar with subtractive synthesis might find it hard work to get what they want from it. This really is an expert’s device.

The architecture is slightly odd. It has 3 oscillators (with the standard virtual-analogue waveforms), each with its own filter, LFO and sequencer. Yes, the LFOs, filters and sequencers are per-oscillator! There are also 3 global LFOs (called Waves) that can be applied to the built-in effects as well as the oscillator parameters. There are 4 effects slots which can be placed almost anywhere you like on one of the two synth busses or the dry channel, and they are of very good quality – as you would expect from a BOSS device. There is a good range of the usual effects – delays, reverbs, phasers, flangers & distortions – all with a good range of options. There are also combined effects (delay+reverb, for example) so you can make full use of those four slots. Most of the parameters of the effects can be controlled from the Wave LFOs, although the way you configure those is rather clunky.

There are a few downsides. While it has MIDI in & out sockets (including USB), it does not send or receive MIDI notes – only control change and program change. I would also have liked more waveforms than just the standard saw, triangle, sine and square, and maybe some interaction between the waveforms (eg FM). Also, the software editor doesn’t work on macOS Sierra. Even the driver (which is supposed to work) crashed my system … and WHY OH WHY do MIDI devices need drivers anyway when they should just be class-compliant?! Sigh.

But generally I think it’s an amazing device and if you’re a violin player who’s also into synthesis I strongly recommend you have a look at it.

I made a video about it with more information and examples:

Read Full Post »

As I blogged earlier I’m enjoying using the Roli Seaboard with the Waldorf Blofeld hardware synth. So I thought I’d share how I create patches that make good use of the capabilities of the Roli with the Blofeld. In the video below I take a very basic sawtooth wave and turn that into an expressive (if still not especially beautiful) patch in the Blofeld. You can apply this knowledge to your own patches and make them ‘Roli Aware’ so that you can play them expressively.

The basic steps are as follows:

  • Set the oscillator bend range to 24 so that you can slide up and down the Seaboard. The Blofeld doesn’t allow you to bend more than 2 octaves, so it works fine on the Seaboard 24 but might not be quite as good on the 49.
  • Add a low pass filter and set the cutoff down to 20, and allow the modwheel to modulate it. The Roli sends CC74 for the ‘slide’ dimension, which is not very useful on the Blofeld, so I mapped it to the modwheel (CC 1) using the Reaper CC mapper. Other DAWs might have similar capabilities. It’s quite easy to do in Max or Pure-Data too.
  • The Roli starts the slide with value 64* (the middle of the 0-127 range) when you press a keywave, regardless of where you start, and modulates it down or up from there – so be aware of this when you create your patch. It needs to sound good with the modwheel halfway. This is why I start the filter at 20 and not 0.
  • Set the amplitude envelope attack and release to 64.
  • Set the mod matrix to Velocity modulating AE attack at -64. This allows you to press the keywave gently to get a slow attack, and quickly to get a fast attack.
  • Set the mod matrix to Release Velocity (Rel.Velo) modulating AE Release at -64. Now when you release a keywave quickly you get a quick release and slowly you get a slow release. These two options allow you to play legato and staccato with the same patch. I love this feature!
  • Set the mod matrix Pressure to modulate Volume +64 – this allows you to increase the volume just by pressing harder on the keywave.
  • Set the Amplifier volume to 0. Otherwise the Pressure modulator won’t do anything as it can’t make the volume any louder than 127.
  • Save the preset
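For anyone curious what the CC74-to-modwheel remap (second step above) actually does to the MIDI stream, here’s a minimal Python sketch. I did it with the Reaper CC mapper, so the function name and the raw 3-byte message format here are just for illustration:

```python
def remap_slide_to_modwheel(msg):
    """Remap the Roli 'slide' dimension (CC 74) to the modwheel (CC 1).

    msg is a raw 3-byte MIDI message: [status, data1, data2].
    Control Change messages have status bytes 0xB0-0xBF (one per channel).
    """
    status, data1, data2 = msg
    if status & 0xF0 == 0xB0 and data1 == 74:  # CC 74 on any channel
        return [status, 1, data2]              # same channel, now CC 1
    return msg                                 # everything else untouched

# A slide gesture becomes a modwheel move; notes pass through unchanged.
print(remap_slide_to_modwheel([0xB0, 74, 100]))  # → [176, 1, 100]
print(remap_slide_to_modwheel([0x90, 60, 64]))   # → [144, 60, 64]
```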

That’s a basic preset that will work with the Roli in single channel mode. To really get the best out of it you need to create a Multi with the same patch in slots 1-10. That way the Roli can send a note on each MIDI channel and they can act independently.

Be sure that the multi preset you use has a different MIDI channel for each part – this is the default, so that channel 1 goes to Part 1, channel 2 to Part 2, etc. You also need to set the MIDI channel (in Global options) to Omni, or the Blofeld will only listen on one MIDI channel. I forgot to mention this in the video.

And that’s it! Even with a bare sawtooth that is quite fun, but once you get your own really nice sounds you can exploit the Roli Seaboard to its full, expressive potential. Of course you can, and should, experiment with the values shown here; they are intended as a starting point for making your own expressively played sounds.

Here’s the video showing the whole process from start to finish, along with a completed patch:

  • If you set the Seaboard to MPE mode (using the dashboard) then the starting point for the ‘slide’ dimension will depend on where you press the keywave. I don’t think this is mentioned in the manuals; Roli support told me about it!

Read Full Post »

I was playing with the wonderful KMI QuNeo controller the other day and started experimenting with bending synth notes and/or filters using the pressure and X/Y positions on the pads – I’m a violinist, it’s what we do! This works really well for single notes but because things like bends and filter positions are a single CC sent to affect all of the notes played on a synth it’s much less useful when playing more than one note. Worse: if you move one finger then another, the CC value jumps between the two positions giving you a very glitchy sound. Some might like this effect but it’s not what I’m going for.

So, remembering that MIDI has an ‘aftertouch’ option I started looking into that. I didn’t really have a clear idea what aftertouch was, I’d played a StudioLogic Sledge2 synth in a shop and felt the extra travel on the keyboard and thought that seemed to be what I needed. However it turns out that there are TWO types of aftertouch.

‘Channel pressure (aftertouch)’ is pretty much just another modwheel-type controller – but usually controlled from the keys themselves, which does make it handier than a modwheel when performing. It affects the whole synth, though, and therefore suffers from the same problem as a modwheel or global CC. This, I found out, is what most manufacturers seem to mean when they say they ‘support aftertouch’ – including the Sledge. ‘Polyphonic pressure (aftertouch)’ is the thing I wanted – and it’s much harder to find, in both synths and keyboards. The only synth I have that supports it is the Waldorf Blofeld (ah Blofeld, how do I love thee, let me count the ways). To test this I wrote a small Max patch that added Poly Pressure messages to the last note played on the keyboard, and controlled it from an on-screen slider (see video below). Using this I could bend one note in a chord while the others stayed the same. Yes!
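At the byte level the difference between the two is small but crucial: channel pressure carries one value for the whole channel, while poly pressure names the note it applies to. A quick Python sketch (the helper names are mine, not from any library):

```python
def channel_pressure(channel, pressure):
    """Channel aftertouch: one pressure value for everything on the channel.
    Status byte is 0xD0 + channel, followed by a single data byte."""
    return [0xD0 | channel, pressure]

def poly_pressure(channel, note, pressure):
    """Polyphonic aftertouch: pressure for one specific note.
    Status byte is 0xA0 + channel, followed by note number then pressure."""
    return [0xA0 | channel, note, pressure]

# Channel pressure can't say WHICH note to bend; poly pressure can.
print(channel_pressure(0, 100))     # → [208, 100]
print(poly_pressure(0, 60, 100))    # → [160, 60, 100]
```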

The problem now was that the QuNeo, while being a very flexible controller, is actually very limited in what MIDI messages it will send. Basically it only sends note and CC messages – and they have to be on the same channel for each pad. Contrast this with the SoftStep2 (my first KMI purchase) that will send almost anything. Luckily I’m a programmer and have been working with my Raspberry Pi system for a while now so I wrote some code that converts the CC numbers sent by the QuNeo to PolyPressure messages.

The way this works is that all of the pads send notes when pressed and send the same CC number with pressure information, all on MIDI channel 10 (eg I press pad 4, it sends note on for MIDI note 44, and continuously sends CC 44 with the value of my finger pressure). The Raspberry Pi code then intercepts any channel 10 messages, converts any notes back to channel 1, and converts all CCs to channel 1 Poly Pressure messages. The sliders and knobs still send their existing CCs on channel 1 so that they can still do the normal global synth control things such as filter and volume changes.
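My actual code runs as part of my Raspberry Pi system, but the core conversion is simple enough to sketch in a few lines of Python (the function name and raw 3-byte message format are mine for illustration; real code would be reading from and writing to MIDI ports):

```python
def quneo_to_polypressure(msg):
    """Convert QuNeo pad messages on channel 10 for the Blofeld.

    Pad notes on channel 10 move to channel 1; CC n on channel 10
    becomes a Poly Pressure message for note n on channel 1.
    Everything else (sliders and knobs on channel 1) passes through.
    Note: MIDI channels are 0-indexed on the wire, so channel 10 is 9.
    """
    status, data1, data2 = msg
    kind, channel = status & 0xF0, status & 0x0F
    if channel != 9:                 # not channel 10 - leave alone
        return msg
    if kind in (0x80, 0x90):         # note off / note on -> channel 1
        return [kind | 0x00, data1, data2]
    if kind == 0xB0:                 # CC n -> poly pressure for note n
        return [0xA0 | 0x00, data1, data2]
    return msg

# Pressing pad 4: the note moves to channel 1, and its matching
# CC 44 becomes per-note pressure on note 44.
print(quneo_to_polypressure([0x99, 44, 100]))  # → [144, 44, 100]
print(quneo_to_polypressure([0xB9, 44, 80]))   # → [160, 44, 80]
```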

I don’t know why Poly Pressure is so underutilised, I really like the results I can get with this system and really wish there were more polyphonic controls available, but I intend to make best use of the one I have, even if it’s only on one synth. It would also be nice if more synths supported it. I know some old synths from Ensoniq and Roland do but I don’t ‘do’ vintage synths – for reasons of my own.

No softsynths seem to do this either (that I can find, though as I’m not a fan of them I didn’t look very hard), which is bizarre given that a softsynth is a logical place for that sort of code to reside. The fact that Ableton seems to actively remove poly pressure messages from the MIDI stream probably isn’t helping *sad face*.

I suspect it might be catch-22. Few synths support polypressure because hardly any keyboards do, and hardly any keyboards do because few synths do. While researching this I did find a keyboard that supports Poly Pressure (supposedly) and have ordered it off eBay – because it’s discontinued. When it arrives (and if it does what I hope) I’ll update this blog post with the results.

Maybe I should save up for a Roli Seaboard  (eek!)

Here’s a video I made showing my experiments. In the first part you can see the filter setting jumping around. In the second I made a Max patch that was a proof-of-concept – mainly to make sure that the Blofeld did what I hoped it was going to do (note there’s a bug in that patch that doesn’t take into account NOTE OFF messages). Lastly there’s the QuNeo running via my Raspberry Pi system that converts CCs to PolyPressure messages.

Read Full Post »

It was Winter 2011 when I started on Leaving Rome – at least that’s what the computer files tell me. I have recordings of Karen playing her various triangles dating from that time at least.

She had asked me if I could write something for triangles – an odd request to say the least, but I like a challenge – and those recordings were the start of a project that has spread over three years! From this distance in time I can’t quite remember what I intended to do with the recordings of Karen playing 4 different triangles, but I do remember she lent me a book on how to play triangles (yes, such things exist) so that I might learn something of the techniques involved and the possibilities of the instrument.

After much research and even more gluing of bits of paper onto other bits of paper, the piece took shape and became Leaving Rome. It’s a hybrid piece: narration (from Juvenal’s Satire No.3) over instrumental backing, with a fully instrumental section following each narrated part, based on the content of the preceding text. While the all-instrumental parts were scored normally, the narrative bits looked something like this:

Leaving Rome extract

Leaving Rome was performed live by Midnight Llama at the Hepworth Gallery in Wakefield in September 2013. Karen had semi-staged her reading parts so it was quite a fun thing to watch. As Midnight Llama make a point of not doing repeat performances I thought that was that. When Karen suggested making a film of the piece I thought this was not only an exciting new project but also a chance to revisit the work as I blogged last year.

And now the film is done! This has been one of the longest-running projects of any of my pieces; the film itself has taken 15 months of work (off and on, mostly off) and I joked that it would have been quicker to do a stop-motion animation of it.

I learned a huge amount making this film. Although Karen decided almost all of the visual content and narrative, I had to learn to handle a video camera and to edit using Final Cut Pro – and also to tell Karen that I couldn’t do what she asked or, more likely, to find out just how to do it anyway. Karen doesn’t like taking ‘no’ for an answer.

Even though the piece wasn’t filmed in linear sequence I think it’s obvious which are the later parts and which the earlier (more primitive) ones – you can learn a lot in 15 months. Looking back at it there are things I know I could do better at, and also I have a shopping list of things I would like to buy before attempting the next video project (yes, there will be one) chief of which is a heavier tripod (Yorkshire is windy!) with a motorised pan and tilt head to avoid the terrible wobbliness of the pans in this film!

Still, I think we did a reasonably good job and I look forward to making more films as I’ve thoroughly enjoyed this one.

Here’s the finished film:

Read Full Post »

My Christmas present to myself this year was a Waldorf Blofeld synthesizer. It’s a polyphonic, multi-timbral digital synth in a small box for £309 – and it’s an amazing instrument!

I’d always been a little sceptical about digital hardware synths, I have a fair few software synthesizers that I’ve used on various projects and wondered what benefit a hardware synth gives you, apart from extra baggage to carry around when you’re gigging. Analogue synths I understand, they sound very different to their digital emulations, but a digi-synth … that’s just code, same as what’s in your laptop, right?

Wrong.

The sound quality of the Blofeld is fantastic. It’s a fully rounded sound with no harshness, and it sits in a mix like it has a right to be there. No spending ages EQing out the bits of the sound you don’t like. OK, I’m no seasoned expert on this subject, I admit, so read Stuart Russell’s words on his Waldorf Streichfett, which he bought at the same time, then come back.

OK, convinced? Maybe not but I reckon if you try one you might be.

I got the ‘desktop’ version of the Blofeld rather than the keyboard one as I’m (still) not really a keyboardist and already have a controller keyboard and a fairly full studio, which also now includes a 5 foot giraffe, but that’s another story.

Because the Blofeld is such an amazingly capable synth it’s quite complicated to work from its front panel. Unlike most analogue synths it’s not really possible to bring out all of the controls onto buttons, knobs and sliders as the case would be HUGE and unwieldy, so the instrument has a small number of knobs and buttons that access all sorts of menus. This makes it very tempting to set up presets and just play those, which I don’t think is really in the spirit of synthesizer playing. You can just about see me (the Blofeld is the white thing underneath the MicroBrute in the bottom-left inset) attempting to manage the menu system while playing live in the CSMA video, which we filmed for the album “The Dog Ate My Max Patch“:

So I got to wondering how better to control the parameters of the sounds while playing live and ended up working with TouchOSC – an app for Android and Apple phones and tablets that can wirelessly send MIDI and/or OSC commands to a laptop. The clever thing about touchOSC is that you can design your own interface … this is perfect!

I worked out the sort of parameters I would like to be able to manipulate on the synth and laid them out on the TouchOSC screen. I included a small 1-octave keyboard and 2 XY controllers. The XY controllers are really useful in this context because some genius at Waldorf decided to have 4 arbitrarily-assignable parameters for each preset (called W, X, Y & Z) that are always controlled by the same MIDI CC numbers. So on some presets I can have W and X modifying a filter sweep and resonance, and on others an oscillator pitch (for example), with no extra programming on the tablet side – a bit like 4 extra modwheels in a way. The TouchOSC screen (download link) always sends the same MIDI codes to the Blofeld regardless of which voice is active.

As you can see, I’ve also included quite a few other things that can control useful bits of the synth, including the sustain pedal (top right) and the arpeggiator controls. The octave slider (bottom right) is the one thing that does not go direct to the Blofeld, as I don’t know any way of changing the octave of a keypress inside the synth, so instead it drives a custom JS plugin that I wrote for REAPER that adds some multiple of 12 to the key number before sending it on. On the whole it’s a really flexible system!
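The JS plugin’s logic is trivial – here’s the equivalent in Python, just to show the idea (the real thing is REAPER JS; the function name and clamping choice here are mine):

```python
def transpose_note(msg, octave_offset):
    """Add octave_offset * 12 to note on/off messages,
    clamped to the valid MIDI note range 0-127.
    Non-note messages (CCs etc.) pass through unchanged."""
    status, note, velocity = msg
    if status & 0xF0 in (0x80, 0x90):  # note off / note on
        note = min(127, max(0, note + 12 * octave_offset))
        return [status, note, velocity]
    return msg

# Octave slider at +1: middle C (60) becomes the C above (72).
print(transpose_note([0x90, 60, 100], 1))  # → [144, 72, 100]
```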

I have 3 of these pages, all different colours so I can easily tell which page I’m on. These send to MIDI channels 1, 2 & 3, and again these go straight (apart from the octave slider) to the Blofeld so that in ‘Multi’ mode you can play 3 voices at a time. I’ve found that this is a good number to work with, both for keeping track of what is happening and for keeping within the limits of the Blofeld’s DSP. Page 4 of the screen is just two 2-octave (plus a bit) keyboards that control the upper two voices; I will probably mostly use the lower one for drones or arpeggiations, as you can hear in the video below.

The other great thing about hardware digital synths is that they ‘fail safe’. If you use a lot of synth plugins on a laptop and run out of steam on the CPU, you get delays, jitters, pops, crackles and all sorts of unpleasantness. The Blofeld will usually just either not play the note you’ve asked for or drop the first note in the list of ones it’s playing. The sound stays beautiful throughout.

Currently I’m using the laptop running REAPER to relay the TouchOSC commands from the tablet to the synth. But for really lightweight gigging I reckon (though haven’t tested it!) that it’s possible to build a small Arduino box that reads the Wifi signals from the tablet and relays them to a MIDI socket … maybe that’s a future project.

To prove a point, here’s me playing three voices of the Blofeld, using just the TouchOSC interface I made. It’s not edited in any way – you can even see me puzzling over why things are not quite behaving as I expected in places 🙂

Read Full Post »

Soundspiral outsideHere it is at last – the Soundspiral video!

It’s been about 18 months since I was asked to perform in the Soundspiral. I chose ‘Ada’ partly because it was a major new (at the time) piece I was very proud of, and partly because it was always meant to be quite an immersive piece that I thought would be appropriate for the Soundspiral. Since that time I have performed half of it in 7.1 and the whole thing twice in stereo in Second Life.

The Soundspiral version is more than all of those in several ways. Of course it’s in 52 speaker surround sound, but in the meantime I’ve been tweaking the parts and samples to make them better and convey the subject matter more clearly. I hope.

Of course, I didn’t generate the full 52 channels from my laptop into the spiral speakers, that would be insane … and impractical. For a start the rig doesn’t actually have 52 inputs but mainly it’s not meant to be addressed that way. The software system that drives the spiral (written by the hugely clever Daz Disley) can drive the speakers in spaces rather than individually. This gives the sound a great coherence and ensures that when things move, they don’t just disappear from one speaker (or set of speakers) but they move smoothly and naturally. It’s immensely impressive and gives the spiral a very clear and listenable sound.

From 2dgoggles, ‘The Client’ by Sydney Padua.

I ended up giving Daz 14 channels that were effectively 7 stereo sets, dividing the spiral into 4 quadrants lengthwise plus top and bottom sets. The last stereo set was for the live violin playing, which I originally thought would move with me, but in the event that turned out to be unnecessary. Most of the played-back samples were positioned using this system – one send for each stereo pair. There was very little movement of sounds involved; I wanted to create a space, rather than go for the ‘Hey, we can spin things round, isn’t that great!’ approach that some surround systems seem so keen on. The spatial idea was inspired by the Sydney Padua cartoons where Ada is lost in the internal workings of the Analytical Engine (see above), and this is depicted in the last 3 minutes of part 1.

A central pillar of the piece is a piece of text where Ada effectively predicts the ability of computers to compose music (soon followed by some automatically generated music – played while I change violins!).

The performance was part of the first Sonophilia festival in Lincoln and I am pleased to report that it drew a good crowd and several people came up to me afterwards to say how much they enjoyed it. I was pleased with my performance, and below is a video of the event for your enjoyment.

Read Full Post »